title | uuid | pmc_id | search_term | text
---|---|---|---|---
Preoperative Ketamine Gargle for Prevention of Postoperative Sore Throat After Tracheal Intubation in Adults: A Meta-Analysis | 821680bb-2dad-4254-8661-99f609713b65 | 11824847 | Surgical Procedures, Operative[mh] | Postoperative sore throat (POST) is a known complication of endotracheal intubation (ETI) under general anesthesia, with an incidence ranging from 28% to 80% . Although POST is self-limiting . it can lead to postoperative complications and significant discomfort for patients. Various nonpharmacological and pharmacological methods have been employed to alleviate POST. Among nonpharmacological approaches, methods such as using smaller endotracheal tubes, lubricating the endotracheal tube with water-soluble gel, adequate relaxation prior to intubation, gentle suctioning of the oropharynx, minimizing cuff pressure, and deflating the cuff completely before extubation have been shown to reduce the incidence of POST . On the other hand, pharmacological measures include inhalation of steroids and other drug gargles. Ionotropic N-methyl-D-aspartate (NMDA), alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), and kainate receptors are present in the central and peripheral nervous systems . Studies have suggested that activation of these receptors can contribute to nociceptive behavior and inflammatory pain . Furthermore, experimental research has demonstrated that peripherally administered NMDA receptor antagonists are involved in the analgesic and anti-inflammatory cascade response mediated by opioid receptors and NMDA receptor antagonists located in the oral and upper respiratory mucosa, as well as the interaction with cytokine production, inflammatory cell regeneration, and inflammatory mediators . However, there is currently conflicting evidence regarding whether ketamine (an NMDA receptor antagonist) gargle can reduce throat pain in patients after ETI . The aim of this study is to explore the potential of ketamine gargle in reducing throat pain in patients after ETI through the meta-analysis. 2.1. Study Design and Registration The protocol for this study was registered in the International Prospective Register of Systematic Reviews (CRD: 42024517271). The reporting of this study followed the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) to ensure comprehensive and transparent reporting . 2.2. Study Selection Criteria Our study had the following inclusion criteria: (1) patients who underwent tracheal intubation under general anesthesia, (2) intervention with ketamine gargle, (3) study designs that included randomized controlled trials (RCTs), (4) the primary outcome being sore throat at 24 h after the operation, and (5) assessment of sore throat using a four-point scale tool (0: no sore throat; 1: mild sore throat [complains of sore throat only on asking]; 2: moderate sore throat [complains of sore throat on his/her own]; and 3: severe sore throat [change of voice or hoarseness, associated with throat pain]). The exclusion criteria were: (1) absence of a placebo control group, and (2) publications not in the English language. 2.3. Search Strategy A comprehensive literature search was conducted by a researcher (FSH) in the following electronic databases from their inception: PubMed, Cochrane Library, Web of Science, ScienceDirect, Scopus, and ClinicalTrials.gov. The searches were performed on November 11, 2023. 
The search keywords were as follows: “postoperative sore throat,” “ketamine gargle,” “tracheal intubation.” Only studies written in English and involving human subjects were included. An additional search was conducted on PubMed and Google Scholar to identify articles that investigated the use of ketamine gargle in relation to POST. The reference lists of included articles were manually searched to identify any potentially missed studies from the systematic search (search strategy: see ).

2.4. Study Selection and Data Extraction

Two reviewers (FSH and BJ) used the inclusion criteria to independently screen the titles and abstracts in the Rayyan systematic review application. Full-text studies were assessed for inclusion by two reviewers (FSH and BJ). Disagreements regarding the inclusion of abstracts and full-text articles were resolved through discussion with another reviewer (FYX) and the senior author (MYS) . Three reviewers (FSH, BJ, and FYX) independently extracted data from the approved full-text studies. Extracted data consisted of study name, year of publication, participants' demographics, study design, surgery type, incidence of sore throat within 24 h (0, 2, 4, 8, and 24 h) after surgery, doses of ketamine used, POST scoring tool, anesthesia time, postoperative analgesia, and size of the tracheal tube.

2.5. Quality Assessment of Studies

Study quality was assessed by two investigators (FSH and BJ) based on the type of study. For RCTs, the Cochrane Collaboration's tool was utilized to evaluate bias across six domains: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other bias . Overall certainty of evidence was evaluated with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach using the GRADEpro Guideline Development Tool (McMaster University and Evidence Prime, 2024; available from gradepro.org). Trial sequential analysis (TSA) was conducted to determine whether the cumulative sample size was sufficient to support firm conclusions. The primary indicators of study quality included clear identification of the study population, outcomes, and outcome assessment; no selective loss of patients during follow-up; and identification of important confounders and/or prognostic factors. Any conflicts were resolved by a third reviewer (FYX).

2.6. Data Analysis

We used Cochrane Review Manager version 5.4 to conduct the meta-analysis with either a fixed-effects or a random-effects model: when I 2 was <50%, we used the fixed-effects model; otherwise, we used the random-effects model. Statistical significance was set at the two-sided p < 0.05 level for all outcomes. Odds ratios (ORs; an OR < 1 indicating that ketamine gargle is a protective factor) and mean differences (MDs) were calculated for dichotomous and continuous outcomes, respectively, each with its 95% confidence interval (CI) . In cases where continuous outcomes were reported as medians with interquartile ranges, we converted them to means and standard deviations using the method proposed by Wan et al. Statistical heterogeneity was assessed using the I 2 statistic. Visual analysis of funnel plots and meta-regression were used to assess publication bias. Sensitivity analysis and meta-regression were performed using Stata Statistical Software 18 (StataCorp, College Station, Texas, United States of America).
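Two of the rules described above lend themselves to a compact illustration: the Wan et al. conversion of medians and interquartile ranges to means and standard deviations, and the I²-based choice between fixed- and random-effects models. The following Python sketch shows one way these rules could be coded; it is an illustration under stated assumptions, not the authors' RevMan/Stata workflow, and the example numbers are placeholders.

```python
# Sketch of two analysis rules described above (not the authors' actual
# RevMan/Stata code): Wan et al.'s median/IQR -> mean/SD conversion and
# the I^2-based choice of pooling model.

def wan_mean_sd(q1: float, median: float, q3: float) -> tuple[float, float]:
    """Approximate mean and SD from a median and interquartile range
    (Wan et al., 2014): mean ~ (q1 + m + q3) / 3, SD ~ (q3 - q1) / 1.35."""
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

def choose_model(i_squared: float) -> str:
    """Fixed-effects model when I^2 < 50%, random-effects otherwise."""
    return "fixed" if i_squared < 50.0 else "random"

# Hypothetical example: a trial reporting anesthesia time as median 95 min (IQR 80-115).
mean, sd = wan_mean_sd(80, 95, 115)   # ~96.7 min, ~25.9 min
model = choose_model(31.0)            # "fixed" (e.g. the 2 h POST subgroup, I^2 = 31%)
```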
3.1. System Retrieval

A total of 109 studies were identified, of which 10 RCTs met the inclusion criteria and were included in our study . Among the excluded studies, 48 were removed as duplicates, 40 were excluded after review of titles and abstracts, and an additional 11 were excluded after full-text examination. Specific reasons for exclusion can be found in .

3.2. Basic Characteristics of Included Studies

A total of 593 adults (male/female: 295/298) with ASA Grade I–II were included in this study. The participants consisted of adults undergoing various types of surgery, including pelvic/abdominal elective surgery, septorhinoplasty, abdominal and orthopedic surgery, ear surgery, and elective surgery of unspecified types. Among the 10 included studies, 5 involved gargling 50 mg of ketamine in 29 mL of normal saline for 30 s , 3 used 40 mg of ketamine dissolved in 30 mL of normal saline for 30 s , 1 used 50 mg of ketamine in 30 mL of normal saline for 30 s , and 1 involved gargling 50 mg of ketamine in 29 mL of normal saline for 40 s . All included studies employed the 4-point scale to evaluate POST (Supporting in ).

3.2.1. Main Outcome: Incidence of Sore Throat at 24 h After Operation

A total of 10 studies reported on the incidence of sore throat 24 h after surgery . The meta-analysis demonstrated that ketamine gargle was associated with a significantly reduced occurrence of sore throat at 24 h compared with the placebo groups (OR: 0.36; 95% CI: 0.25–0.51; p < 0.00001) . There was no significant heterogeneity observed between the studies ( I 2 = 0%; p = 0.44). Following sensitivity analysis, our results remained statistically significant (OR: 0.36; 95% CI: 0.25–0.51) (Supporting in ). These findings indicate the robustness of our results.

3.2.2. Secondary Outcome: Subgroup Analysis of Different Time Points Within 24 h

Three studies reported on the occurrence of sore throat immediately after surgery (0 h) . The forest plot demonstrated that ketamine gargle was effective in reducing POST at 0 h compared with the placebo groups (OR: 0.14; 95% CI: 0.04–0.47; p = 0.002) . However, there was heterogeneity among the studies and potential bias in the results, so cautious interpretation is necessary ( I 2 = 67%; p = 0.05).
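To make the odds-ratio pooling behind these results concrete, the sketch below computes a study-level OR with its 95% CI from a 2×2 table and combines studies with an inverse-variance fixed-effect model (Review Manager's default for dichotomous data is Mantel–Haenszel, so this is a simplified stand-in); the 2×2 tables in the usage example are placeholders, not data from the included trials.

```python
import math

def or_and_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Odds ratio and 95% CI for one study from a 2x2 table."""
    a, b = events_tx, n_tx - events_tx        # treatment: events / non-events
    c, d = events_ctl, n_ctl - events_ctl     # control:   events / non-events
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or), math.exp(log_or - z * se), math.exp(log_or + z * se)

def fixed_effect_pooled_or(studies):
    """Inverse-variance fixed-effect pooling of log odds ratios.
    `studies` is a list of (events_tx, n_tx, events_ctl, n_ctl) tuples."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)     # weight = 1 / variance of log OR
        num += w * log_or
        den += w
    pooled, se = num / den, math.sqrt(1.0 / den)
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Placeholder 2x2 tables (events_tx, n_tx, events_ctl, n_ctl), not real trial data.
example = [(8, 30, 18, 30), (6, 25, 14, 25), (10, 40, 21, 40)]
print(fixed_effect_pooled_or(example))   # a pooled OR < 1 favours ketamine gargle
```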
Four studies included the outcome of sore throat at 2 h postoperatively . The meta-analysis indicated that ketamine gargling significantly reduced the incidence of sore throat at 2 h after surgery (OR: 0.30; 95% CI: 0.17–0.52; p < 0.0001) . Furthermore, there was no substantial heterogeneity among the studies, suggesting the reliability of our results ( I 2 = 31%; p = 0.23). Six studies reported on sore throat at 4 h postoperatively , and the meta-analysis demonstrated that ketamine gargle was associated with a decreased risk of sore throat at this time point compared with the placebo groups (OR: 0.32; 95% CI: 0.20–0.52; p < 0.00001) . No statistically significant heterogeneity was observed among the studies ( I 2 = 0%; p = 0.65), indicating the reliability of our results. Four studies reported on sore throat at 8 h postoperatively . Our analysis revealed that ketamine gargle was also associated with a lower risk of sore throat at 8 h after surgery (OR: 0.40; 95% CI: 0.23–0.70; p = 0.001) . There was no statistically significant heterogeneity observed among the studies ( I 2 = 29%; p = 0.24).

3.2.3. Secondary Outcome: Anesthesia Time

Four studies reported on anesthesia time . Our meta-analysis revealed that ketamine gargle did not result in a significant reduction in anesthesia time (min) (MD: −1.16; 95% CI: −6.44 to 4.11; p = 0.67) (Supporting in ). Furthermore, there was no notable heterogeneity observed among the studies ( I 2 = 0%; p = 0.89), indicating the reliability of the results.

3.3. Risks of Bias and Publication Bias

Overall, the differences in risk of bias among the studies were minimal. Three studies were classified as having a high risk of bias due to inadequate blinding, while one study had a high risk of bias due to incomplete data . There was a potential risk of bias in three randomized controlled trials, whereas three randomized controlled trials were considered to have a low risk of bias and demonstrated good overall quality . The GRADE ratings for all outcomes are shown in . The funnel plot of the included studies in this meta-analysis demonstrated overall symmetry, indicating no evidence of publication bias . Meta-regression based on sample size showed no significant publication bias in this study ( p = 0.86) (Supporting in ). TSA showed that the cumulative evidence crossed both the conventional significance boundary and the TSA monitoring boundary, indicating that the pooled result is conclusive (Supporting in ).
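The I² values quoted throughout the results above are derived from Cochran's Q; a minimal, self-contained sketch of that calculation (illustrative only, with placeholder effect estimates) is shown below.

```python
import math

def cochran_q_i_squared(effects, variances):
    """Cochran's Q and I^2 from study-level effect estimates (e.g. log ORs)
    and their variances. I^2 = max(0, (Q - df) / Q) * 100."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Placeholder log-OR estimates and variances, for illustration only.
log_ors = [math.log(0.30), math.log(0.45), math.log(0.22), math.log(0.38)]
variances = [0.12, 0.15, 0.20, 0.10]
q, i2 = cochran_q_i_squared(log_ors, variances)
# An I^2 below 50% would justify the fixed-effects model used above.
```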
Our study demonstrated that a prophylactic ketamine gargle is effective in reducing the incidence of POST in surgical patients who require general anesthesia with tracheal intubation, when compared with placebo. This effect may be attributed to ketamine's ability to block various pain-related receptors. Ketamine can block NMDA receptors, as well as 2-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid and kainic acid receptors, in peripheral nerve synapses and the spinal cord .
The administration of NMDA receptor antagonists peripherally has been associated with the initiation of the anti-inflammatory cascade and antinociception . The 2-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid and kainic acid receptors mediate fast excitatory synaptic transmission in the central nervous system . A network meta-analysis conducted by Narinder P. Singh et al. revealed that topical application of magnesium, followed by liquorice and corticosteroids, was the most effective in preventing POST 24 h after ETI, while ketamine did not show the same effectiveness . This finding contradicts our own results. One possible explanation is that most of the ketamine studies included in that network meta-analysis were compared indirectly with other drugs, rather than directly with a placebo. In addition, the network meta-analysis examined the effect of ketamine on postoperative cough and hoarseness, and found that ketamine did not reduce the incidence of these symptoms 24 h after surgery . This was also a limitation in our study, as only one or two of the ten RCTs included outcome measures such as postoperative cough or hoarseness. Furthermore, a systematic review conducted by Jillian Mayhood et al. demonstrated that ketamine gargling can reduce the incidence of sore throat at 0, 2, 4, 8, and 24 h following airway instrumentation . However, that systematic review included only five RCTs. Although its results are consistent with ours, our study provides stronger evidence: our meta-analysis had a larger sample size, including 10 RCTs with a total of 593 adult participants. In addition, our study encompassed a variety of surgical types, all of which involved ETI, whereas the systematic review by Jillian Mayhood et al. was limited to patients undergoing airway instrumentation. Ketamine has a short distribution half-life in humans, usually 2–4 min, and 80% is converted to norketamine by N-demethylation. The half-life of norketamine is up to 2–4 h . In animal studies, norketamine exerted about one-third of the antinociceptive properties of ketamine . Norketamine may therefore reduce POST within 4 h by reducing nociception. Alternatively, POST may be the result of local trauma leading to sterile mucosal inflammation . Kempe et al. showed that mouth breathing causes oral dryness and inflammation in patients undergoing septal surgery . We hypothesized that the presence of sore throat at 24 h postoperatively reflects the slow development of local inflammation. Zhu et al. showed that nebulized ketamine attenuated many of the core components of inflammatory change . Reduction of this inflammation by ketamine gargling may explain the reduction of sore throat at 24 h after surgery. Several limitations exist in this meta-analysis. First, the included studies rated at high risk of bias may have introduced bias into the results. Second, the RCTs included in this meta-analysis mainly focused on pelvic or abdominal surgery or septoplasty . Third, only two of the included studies used low-dose fentanyl for analgesia after surgery . Our meta-analysis demonstrated the efficacy of a prophylactic ketamine gargle in reducing the incidence of POST across all studied time intervals in patients who required tracheal intubation during general anesthesia, when compared with a placebo. Future research should explore the effect of ketamine gargle on POST in patients who require double-lumen ETI.
In addition, further studies should investigate the molecular mechanism through which ketamine reduces POST. |
Neural circuit basis of placebo pain relief | 1f4468e0-4023-4d35-9a57-df2645c6f5c7 | 11358037 | Physiology[mh] | Pain is a subjective experience during which mind–body interactions exert a powerful influence both on pain perception and on the success of pain treatment , . One notable example is placebo analgesia, a contextual, cue-based learning phenomenon in which an individual’s positive expectation suffices to reduce pain perception and pain-related behaviours in the absence of any analgesic drug or procedure , , . Placebo analgesia has a prominent role in both medical practice and clinical trials . Expectations of pain relief are induced during cognitive behavioural therapy to promote recovery in patients with postoperative and/or chronic pain, while strong analgesic responses in the placebo groups of clinical trials hinder the development of pain treatments. Notwithstanding the importance of this placebo effect, our understanding of its underlying biological mechanisms remains limited to human brain imaging data showing that activity in some brain regions, such as the anterior cingulate cortex (ACC), correlates with placebo analgesia – . Here we combined an advanced mouse behavioural assay of expectation-based pain relief, targeted recombination in active populations (TRAP) of neurons mediating pain-relief expectation, neural Ca 2+ imaging in freely behaving mice, single-cell RNA sequencing (scRNA-seq), electrophysiological recordings and optogenetics to establish circuit, cellular and synaptic mechanisms through which positive expectations produce pain relief. We first developed a 7-day placebo analgesia conditioning (PAC) assay that generates a placebo-like anticipatory pain-relief expectation in mice and permits evaluation of the resulting analgesic effect (Fig. and ). The PAC apparatus consists of two chambers with distinct visual cues. The assay comprises three phases: habituation (days 1–3), conditioning (days 4–6) and post-conditioning analgesia testing (day 7; Fig. ). During the habituation phase, the floors of both chambers are set at 30 °C (innocuously warm) and the mice can freely explore both chambers. Mouse performance (for example, latency of border crossing, time spent in each chamber) on day 3 serves as the pre-conditioning baseline exploratory pattern. During the conditioning phase, the floor of chamber 1, on which the mouse begins the session, is set at 48 °C (noxiously hot), whereas the floor of chamber 2 remains at an innocuous 30 °C. This trains mice to expect pain relief when leaving chamber 1 and entering chamber 2. Finally, for the post-conditioning analgesia test (post-test), the floors of both chambers are set at 48 °C to evaluate any analgesic effect induced by the expectation of pain relief. Compared with the unconditioned control mice, conditioned mice progressively developed a significant preference for chamber 2 during the conditioning phase (days 4–6). Importantly, this preference persisted on the post-test day (Fig. ), despite that the floors of both chambers were set at the same temperature and should, in the absence of conditioning, elicit identical heat pain perception. Furthermore, conditioned mice exhibited increased latencies to revisit chamber 1 during both the conditioning phase and during the post-test (Fig. ). However, setting both chambers at 30 °C during the post-test diminished this preference (Extended Data Fig. ). Together, these results suggest that PAC generates an expectation of pain relief from chamber 2. 
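As a concrete summary of the assay just described, the floor temperatures of the two chambers across the three PAC phases can be written as a small configuration table; the values below are taken from the text, while the dictionary layout and function name are our own illustrative choices.

```python
# Floor temperatures (deg C) of the two chambers across the 7-day PAC assay,
# as described above. 30 C is innocuously warm; 48 C is noxiously hot.
PAC_PROTOCOL = {
    "habituation":  {"days": (1, 2, 3), "chamber_1": 30, "chamber_2": 30},
    "conditioning": {"days": (4, 5, 6), "chamber_1": 48, "chamber_2": 30},
    "post_test":    {"days": (7,),      "chamber_1": 48, "chamber_2": 48},
}

def floor_temps(day: int) -> tuple[int, int]:
    """Return (chamber_1, chamber_2) floor temperatures for a given day."""
    for phase in PAC_PROTOCOL.values():
        if day in phase["days"]:
            return phase["chamber_1"], phase["chamber_2"]
    raise ValueError(f"day {day} is outside the 7-day PAC assay")

assert floor_temps(7) == (48, 48)   # both floors hot on the post-test day
```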
To determine whether this PAC-induced expectation of pain relief indeed recapitulates key features of human placebo analgesia, we compared the nocifensive behaviours displayed by control and conditioned mice during the post-test (Fig. ). PAC significantly prolonged the latency for paw licking, rearing and jumping (Fig. ). Moreover, mice subjected to PAC exhibited fewer overall nocifensive behaviours during the post-test (Fig. ). This PAC-induced analgesia persisted for at least a week after the conditioning phase (Extended Data Fig. ). Importantly, administration of the opioid receptor antagonist naloxone during the post-test, but not during the conditioning phase, abolished this analgesic effect (Extended Data Fig. ), consistent with the known involvement of endogenous opioid signalling in human placebo analgesia , . Furthermore, PAC-conditioned mice exhibited reduced sensitivity to chemical pain induced by formalin injection during a modified post-test in which both floors were set to 30 °C (Extended Data Fig. ). However, outside of the PAC apparatus, the mechanical, thermal and chemical pain sensitivities of PAC-conditioned mice on day 7 were comparable to those of unconditioned mice (Extended Data Fig. ). Finally, when we confined PAC-conditioned mice to either chamber 1 or chamber 2 during the post-test by blocking the opening between the two chambers, mice displayed similar latencies to initiate nocifensive behaviours regardless of the chamber in which they were confined (Extended Data Fig. ). Notably, consistent with human studies suggesting sex differences in placebo analgesia , female mice showed behaviours similar to those of male mice during PAC (Extended Data Fig. ), but with different variability for licking ( P = 0.017) and jumping ( P < 0.001) nocifensive behaviours. Taken together, these results show that PAC produces an expectation-based analgesic effect that shares key features of human placebo analgesia, enabling modelling and investigation of placebo analgesia in rodents. Human brain imaging studies suggest that the ACC, especially the rACC, contributes to placebo analgesia – , . Notably, the ACC contains a wide variety of cell types, including projection neurons such as intratelencephalic (IT) and pyramidal tract (PT) pyramidal neurons located in distinct cortical layers and that have diverse intracortical and subcortical connections . To identify rACC pathways that might contribute to placebo analgesia, we injected into the rACC of TRAP2 ( Fos CreERT2 ) mice an adeno-associated virus (AAV) that permits expression of synaptophysin–mRuby in a Cre-dependent manner; this approach enabled us to label the presynaptic terminals of rACC neurons that were active during PAC (Fig. ). This procedure revealed dense axonal projections from labelled layer 5 (L5) rACC neurons to three brain areas: the striatum (dorsal caudate nucleus and putamen), thalamic/subthalamic nuclei (ventral posteromedial thalamic nucleus, mediodorsal thalamic nucleus, zona incerta) and, notably, the Pn (Fig. ), a region of the pons that mediates cortico-cerebellar communication. The contributions of striatal and thalamic circuits to various sensory-discriminative and affective-motivational aspects of pain have been described previously – . However, the Pn has no established role in pain modulation, although previous studies have reported Pn activation during pain , . 
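A hypothetical sketch of how the latency-based nocifensive measures described above (first paw lick, rear or jump during the post-test) could be extracted from timestamped behaviour logs and compared between conditioned and control mice; the log format and the use of a Mann–Whitney U test are assumptions for illustration, not a description of the authors' scoring pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def first_latency(events, behaviour, session_start=0.0):
    """Latency (s) to the first occurrence of `behaviour` in a list of
    (timestamp_s, label) tuples; NaN if the behaviour never occurs."""
    times = [t for t, label in events if label == behaviour]
    return (min(times) - session_start) if times else float("nan")

def compare_groups(conditioned_events, control_events, behaviour):
    """Mann-Whitney U test on first-behaviour latencies of two groups of mice."""
    cond = [first_latency(ev, behaviour) for ev in conditioned_events]
    ctrl = [first_latency(ev, behaviour) for ev in control_events]
    cond = [x for x in cond if not np.isnan(x)]
    ctrl = [x for x in ctrl if not np.isnan(x)]
    return mannwhitneyu(cond, ctrl, alternative="greater")  # conditioned > control

# Toy logs: one mouse per list entry, each a list of (time_s, behaviour) events.
conditioned = [[(42.1, "lick"), (55.0, "rear")], [(61.3, "lick")]]
control = [[(12.7, "lick")], [(9.8, "lick"), (20.5, "jump")]]
print(compare_groups(conditioned, control, "lick"))
```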
Notably, while relatively understudied in the pain field, the cerebellum, like the rACC, consistently shows increased activity during placebo analgesia , , , . Furthermore, patients who have experienced a cerebellar infarction exhibit impaired placebo analgesia . We therefore next investigated the function of the rACC→Pn pathway in placebo analgesia. To record the neural dynamics of rACC→Pn neurons in real time during placebo analgesia, we used a head-mounted miniature microscope to image single-cell somatic Ca 2+ activity during the PAC assay. Viral anterograde and retrograde tracing, and whole-cell recording from Pn neurons during optogenetic stimulation of rACC neuron terminals confirmed monosynaptic glutamatergic connectivity between the rACC and Pn (Extended Data Fig. ). To express GCaMP7f selectively in rACC→Pn cells, we injected a Cre-encoding rAAV with retrograde transport properties into the Pn and injected a Cre-dependent GCaMP7f-encoding rAAV into the rACC (Fig. ). The numbers of cells that we detected from each mouse varied across different phases of PAC (Extended Data Fig. ). By aligning cell maps from day 3 (before conditioning), day 6 (conditioning) and day 7 (after conditioning), we aligned a total of 205 cells across days from 6 mice (34 ± 7 cells per mouse). Notably, intracranial virus injection, GRIN lens implantation and miniature microscope mounting had no significant effect on the measured performance metrics of mice during PAC, including total walking distance and average movement speed (Extended Data Fig. ). We next examined Ca 2+ signals in these neurons during border crossing, a timepoint around which mice should expect pain relief as a conditioned response to PAC training. We found that the Ca 2+ activity of rACC→Pn neurons increased progressively during the conditioning phase (Extended Data Fig. ). On the post-test day, these neurons showed elevated Ca 2+ activity, at the levels of individual neurons (Fig. ) and of individual mice (Fig. ). Among the cross-day-aligned rACC→Pn neurons, 58% exhibited greater activity during the post-test compared with the pre-conditioning baseline, while 25% showed progressively increased activity throughout all phases (Extended Data Fig. ). Furthermore, the discriminability index of rACC→Pn neurons between the first border crossing (with conditioned pain-relief expectation) and the first crossing back (without a conditioned expectation of pain relief) also increased after conditioning (Extended Data Fig. ). This higher discriminability index during the post-test suggests that the increased activity of rACC→Pn neurons is not due to an overall increase in neural activity after conditioning. To exclude the possibility that the biophysical properties of the Ca 2+ indicator, especially its long decay dynamics, might explain the observed differences before and after PAC conditioning, we performed the same analysis using binary Ca 2+ transient event data, which yielded similar results (Extended Data Fig. ). These increases in Ca 2+ activity and discriminability index disappeared when we tested a shuffled control dataset with randomized crossing times (Extended Data Fig. ). Furthermore, we found no correlation between the activity of rACC→Pn neurons and mouse locomotor speed (Extended Data Fig. ), indicating that these cells are not merely responding to generic movement. 
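A minimal sketch of the kind of peri-event analysis described above: aligning z-scored ΔF/F traces to border-crossing times and building a shuffled control with randomized crossing times. Array shapes, window length and sampling rate are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def peri_event_activity(dff, crossing_frames, window=60):
    """Mean z-scored dF/F in a +/- window-frame window around each crossing.
    dff: (n_neurons, n_frames) array; returns (n_neurons, n_events)."""
    z = (dff - dff.mean(axis=1, keepdims=True)) / dff.std(axis=1, keepdims=True)
    out = []
    for f in crossing_frames:
        lo, hi = max(0, f - window), min(z.shape[1], f + window)
        out.append(z[:, lo:hi].mean(axis=1))
    return np.stack(out, axis=1)

def shuffled_control(dff, n_events, n_shuffles=1000, window=60, seed=0):
    """Null distribution of mean peri-event activity from randomized crossing times."""
    rng = np.random.default_rng(seed)
    n_frames = dff.shape[1]
    means = []
    for _ in range(n_shuffles):
        fake = rng.integers(window, n_frames - window, size=n_events)
        means.append(peri_event_activity(dff, fake, window).mean())
    return np.array(means)

# Usage (toy data): 34 neurons, 20-min session at 20 Hz, 5 border crossings.
dff = np.random.randn(34, 20 * 60 * 20)
real = peri_event_activity(dff, [2400, 5200, 9100, 14000, 20000]).mean()
null = shuffled_control(dff, n_events=5)
p = (null >= real).mean()   # empirical p-value against the shuffled null
```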
To further confirm that increased activity in rACC→Pn neurons corresponds with pain-relief expectation and not pain-associated aversion, we compared, on the post-test day, the activity of rACC→Pn neurons during the first border crossing (with conditioned pain-relief expectation), the first crossing back (without conditioned pain-relief expectation) and the last border crossing (reduced or no conditioned pain-relief expectation due to expectation violation ). rACC→Pn neurons showed no increased activity during the first crossing back and significantly reduced activity during the last border crossing, averaged across individual neurons (Fig. ) and across mice (Fig. ). Moreover, the elevated activity of rACC→Pn neurons during the post-test progressively decreased after arriving in chamber 2 (Extended Data Fig. ), aligning well with the violation of expectation. Furthermore, we examined the relationship between the latency of the first border crossing and the activity of rACC→Pn neurons during that crossing for each mouse. We reasoned that, if rACC→Pn neurons encode the expectation of pain relief, then mice with the strongest expectation-based motivation to cross the border should show the greatest increase in rACC→Pn neuron activity. Consistent with this prediction, linear regression analysis revealed a negative correlation between the latency of the first border crossing and rACC→Pn neural activity (Fig. ), further evincing the involvement of the rACC→Pn pathway in pain-relief expectation. To clarify whether the increase in activity after conditioning was specific to rACC→Pn neurons, or rather a general feature of all rACC output neurons, we recorded the Ca 2+ dynamics of IT neurons, the other major type of deep-layer pyramidal neurons (rACC→Pn neurons are PT neurons), during PAC. IT neurons showed no significant change in Ca 2+ activity during the first border crossing after conditioning (Extended Data Fig. ), suggesting a specific role for rACC→Pn neurons during pain-relief expectation. Finally, in a separate experiment, we examined rACC→Pn neural activity during noxious thermal, noxious mechanical or innocuous mechanical stimulation (Extended Data Fig. ). None of these stimuli significantly affected rACC→Pn neural Ca 2+ activity, in agreement with the lack of significant change in rACC→Pn neural activity during licking and rearing nocifensive behaviours on the post-test day (Extended Data Fig. ), arguing against a general role for these cells in nociception or mechanosensation. Consistent with the learning-based nature of placebo analgesia , , the Ca 2+ activity of rACC→Pn neurons increased progressively during PAC (Fig. and Extended Data Fig. ). To examine the underlying synaptic mechanisms, we used brain-slice electrophysiology. First, we labelled rACC→Pn neurons by injecting a tdTomato-encoding AAV with retrograde transport properties into the Pn (Fig. and Extended Data Fig. ). Mice were then subjected to PAC and euthanized immediately after the conditioning phase for electrophysiology recording (Extended Data Fig. ). Passive membrane properties of rACC→Pn neurons, such as the resting membrane potential, input resistance, amplitude and half-duration of the action potential, and action-potential firing frequency, remained unchanged after PAC (Extended Data Fig. ). However, rACC→Pn neurons from conditioned mice (Extended Data Fig. ), but not other L5 rACC neurons (Extended Data Fig. 
), displayed more burst firing at the beginning of current injection than rACC→Pn neurons from control mice. Moreover, PAC significantly increased the amplitude, but not the frequency, of spontaneous excitatory postsynaptic currents (EPSCs; Extended Data Fig. ), suggesting a postsynaptic change in rACC→Pn neuron function. Consistent with this, the paired-pulse ratio (PPR), a measure of presynaptic function, was statistically indistinguishable between conditioned and control mice (Extended Data Fig. ), whereas the AMPAR/NMDAR ratio, a postsynaptic characteristic of synaptic transmission, increased significantly (Fig. ). We next tested whether PAC alters long-term potentiation (LTP), a cellular process that underlies learning and memory , in rACC→Pn neurons. We induced LTP using classical theta-burst stimulation (TBS). In rACC→Pn neurons from control mice, the amplitude of EPSCs increased after TBS, then quickly returned to the baseline levels. By contrast, rACC→Pn neurons from conditioned mice showed robust LTP that lasted for the entire recording period (40 min) after induction (Fig. ), indicating enhanced synaptic plasticity. Cortical inhibitory interneurons control Ca 2+ dynamics, burst firing, spontaneous release and synaptic plasticity of principal neurons through feedforward inhibition, facilitating learning . To test whether PAC alters feedforward inhibition, we recorded evoked EPSCs or IPSCs (inhibitory postsynaptic currents) in isolation by holding the membrane potential of rACC→Pn neurons at −70 mV or +10 mV, respectively (Fig. ). In brain slices from control mice, monosynaptic EPSCs were followed by large disynaptic IPSCs, confirming strong feedforward inhibition in this circuit (Extended Data Fig. ). By contrast, rACC→Pn neurons from conditioned mice received significantly weaker feedforward inhibition (Fig. ). Furthermore, the delays between EPSCs and IPSCs were markedly prolonged after conditioning (Fig. ). Similarly, PAC decreased the amplitude and delayed the latency of IPSCs specifically from parvalbumin-positive (PV + ) interneurons (Extended Data Fig. ), which critically contribute to feedforward inhibition in the cortex , . Taken together, these results demonstrate that PAC impairs both the efficacy and timing of feedforward inhibition of rACC→Pn neurons and enhances their excitability. To test the function of the rACC→Pn pathway in placebo analgesia, we injected AAVs to express halorhodopsin (NpHR) or channelrhodopsin-2 (ChR2) in rACC neurons and implanted optic fibres bilaterally over the Pn (Fig. and Extended Data Fig. ). We then photomanipulated rACC→Pn neuron terminals of conditioned mice during the PAC post-test, beginning when mice crossed from chamber 1 to chamber 2 (Extended Data Fig. ). We found that photoinhibition substantially reduced PAC-induced latency increases in paw licking, rearing and jumping (Fig. ). Conversely, optogenetically activating the rACC→Pn pathway during the post-test significantly prolonged the latency of mice to display paw licking, but not rearing and jumping behaviours (Extended Data Fig. ). These less-pronounced behavioural changes may indicate a ceiling effect, given that Pn neurons, especially their axon terminals, show high instantaneous firing frequencies (>700 Hz) while coding sensory information . Neither photoinhibition nor photoexcitation produced detectable changes in motor coordination (Extended Data Fig. ). These results indicate that the rACC→Pn pathway mediates PAC-induced analgesia. 
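For orientation, the sketch below shows how two of the synaptic measures reported above are commonly quantified from voltage-clamp traces: the paired-pulse ratio (second evoked EPSC amplitude divided by the first) and the AMPAR/NMDAR ratio (peak EPSC at a hyperpolarized holding potential versus the current measured about 50 ms after stimulation at a depolarized potential). The trace format, holding potentials and the 50-ms convention are generic assumptions, not the authors' exact protocol.

```python
import numpy as np

def paired_pulse_ratio(trace, stim1_idx, stim2_idx, win=200):
    """PPR = peak amplitude after the 2nd stimulus / peak after the 1st.
    `trace` is a baseline-subtracted EPSC trace (inward currents negative)."""
    a1 = np.abs(trace[stim1_idx:stim1_idx + win]).max()
    a2 = np.abs(trace[stim2_idx:stim2_idx + win]).max()
    return a2 / a1

def ampa_nmda_ratio(trace_hyperpol, trace_depol, stim_idx, fs_hz=10000):
    """AMPAR/NMDAR ratio: peak EPSC at the hyperpolarized potential
    (AMPAR-dominated) divided by the current ~50 ms after stimulation at the
    depolarized potential (NMDAR-dominated component)."""
    ampa = np.abs(trace_hyperpol[stim_idx:stim_idx + fs_hz // 100]).max()  # first 10 ms
    nmda_idx = stim_idx + int(0.050 * fs_hz)                               # +50 ms
    nmda = np.abs(trace_depol[nmda_idx])
    return ampa / nmda
```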
To test whether modulating the activity of the rACC→Pn pathway could alter pain, we subjected naive mice to commonly used thermal (hot plate) and mechanical (von Frey) sensitivity tests while optogenetically manipulating the rACC→Pn pathway (Fig. ). In the hotplate test, photoinhibition of the rACC→Pn pathway decreased paw withdrawal latency, while photoexcitation increased this latency compared with control mice (Fig. ). In the von Frey test, photoinhibition of the rACC→Pn pathway increased the paw withdrawal frequency, whereas photoexcitation decreased the paw withdrawal frequency compared with the control mice (Fig. and Extended Data Fig. ). Furthermore, photoinhibition decreased the mechanical sensitivity threshold, whereas photoexcitation increased it (Fig. ). Together, these results indicate that the rACC→Pn pathway can be activated to generate analgesia. Oprd1 + Pn neurons We next sought to manipulate the function of Pn neurons during PAC. However, our understanding of the molecular identity of Pn neurons is limited. We therefore used single-cell transcriptomics to investigate the cellular composition of the Pn and identify marker genes to gain genetic access to and manipulate Pn neurons. We used both high-throughput/low-depth (10x Genomics) and low-throughput/high-depth (SMART-seq) scRNA-seq approaches to comprehensively characterize Pn neurons (Fig. and Extended Data Fig. ). Focusing our analysis on neurons, we detected ten transcriptionally distinct clusters (Fig. ). Five clusters were Slc17a7 + (encoding vesicular glutamate transporter 1) excitatory neurons, comprising 72% of all Pn neurons. The remaining clusters were largely Slc32a1 + (encoding vesicular inhibitory amino acid transporter) inhibitory neurons (Fig. and Extended Data Fig. ). More than half of the excitatory Slc17a7 + Pn neurons coexpressed Slc17a6 (encoding vesicular glutamate transporter 2)—a rare feature for glutamatergic neurons throughout the nervous system (Fig. and Extended Data Fig. ). After examining the expression of endogenous opioid peptides and receptors, which critically contribute to pain modulation and placebo analgesia, we determined that a very large proportion of Pn neurons expresses the δ- and/or μ-opioid receptors (Fig. ). Specifically, 54% and 26% of Pn neurons express Oprd1 and Oprm1 (encoding the δ- and μ-opioid receptor, respectively). In total, 81% of Oprd1 + Pn neurons coexpress excitatory neuron markers ( Slc17a7 , Slc17a6 or both; Extended Data Fig. ). We confirmed the presence of Slc17a6 + Oprd1 + neurons in the Pn using fluorescence in situ hybridization (Fig. ). On the basis of these observations, we used an Oprd1 cre mouse line to investigate the anatomy of Oprd1 + Pn neurons and their function in placebo analgesia. To investigate whether Oprd1 + Pn neurons have a role in placebo analgesia and/or pain modulation, we first tested for a direct connection between the Oprd1 + Pn neurons and rACC projection neurons. Both AAV1-mediated anterograde transsynaptic tracing in WT mice and rabies-mediated retrograde transsynaptic tracing in Oprd1 cre mice (Extended Data Fig. ) indicated a monosynaptic connection between rACC neurons and Oprd1 + Pn neurons. Moreover, anterograde transsynaptic tagging suggested that 53% of neurons in rACC-targeted subregions of the Pn receive monosynaptic inputs from the rACC (Extended Data Fig. ). Given that photoinhibition of rACC→Pn neuron terminals abolished placebo analgesia (Fig. 
), we next tested whether postsynaptic manipulation of Oprd1 + Pn neurons could produce similar effects. We injected AAVs to express NpHR or eYFP (control) in Oprd1 + Pn neurons and implanted optic fibres bilaterally over the Pn (Fig. ). Photoinhibition of Oprd1 + Pn neurons during the post-test abolished the PAC-induced prolonged latency of mice to display first paw licking, rearing and jumping (Fig. ). A more-specific strategy targeting only the Oprd1 + Pn neurons receiving rACC inputs yielded similar results (Extended Data Fig. ). Consistent with these findings, systemic administration of selective agonists for either the µ- or δ-opioid receptor also diminished the analgesic effects induced by PAC (Extended Data Fig. ). Moreover, photoinhibition of Oprd1 + Pn neurons significantly increased mechanical and thermal sensitivity in the von Frey and hotplate tests, respectively (Extended Data Fig. ). Photoinhibition of Oprd1 + Pn neurons produced no detectable change in locomotion or motor coordination (Extended Data Fig. ). Lastly, photoinhibition of Oprd1 + Pn neurons during the conditioning phase of PAC showed a trend toward attenuating PAC-induced analgesia on the post-test day (Extended Data Fig. ). Together, these results indicate that L5 rACC neurons projecting onto Oprd1 + Pn neurons critically contribute to both placebo analgesia and pain processing. To gain further evidence that the rACC→Pn pathway mediates pain-relief expectation, we examined the primary target of Pn neurons—the cerebellum. To label the projections of Oprd1 + Pn neurons in the cerebellum, we injected into the Pn of Oprd1 cre mice an AAV encoding mGFP and synaptophysin–mRuby in a Cre-recombinase-dependent manner. The resulting tracing data showed that Oprd1 + Pn neurons mainly project to cerebellar lobules VI, Crus I and Crus II (Extended Data Fig. ), which support the cognitive functions of the cerebellum . We then used a head-mounted miniature microscope to image the dendritic Ca 2+ activity of Purkinje cells , the principal neurons of the cerebellar cortex, in lobule VI of the cerebellar vermis in freely behaving mice during PAC (Fig. ). Cerebellar Purkinje cells receive excitatory input from a single climbing fibre (CF) originating in the inferior olive and from around 200,000 parallel fibres (PFs) that relay information sent disynaptically from the Pn through cerebellar granule cells. Spontaneous CF activity (1–2 Hz) triggers dendritic Ca 2+ spikes that can pervade the entire dendritic tree, whereas PF inputs, depending on their activity level, can evoke smaller to moderate dendritic spikes – . Furthermore, when near coincident in time with CF input, PF activity can lead to supralinear Ca 2+ excitation , . Thus, if the Pn relays pain-relief expectation from the rACC to the cerebellum through PFs, then PAC should increase the amplitudes of Purkinje cell dendritic Ca 2+ spikes and the occurrence frequency of dendritic Ca 2+ spikes that are large enough to be detected by Ca 2+ imaging. To test these predictions, we analysed Ca 2+ imaging recordings from 276 cross-day-aligned Purkinje cells (Fig. ) during PAC. To identify Purkinje cells that might encode pain-relief expectation, we performed a classification analysis (Extended Data Fig. ). 
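One simple way to implement a classification analysis of this kind is to cluster each Purkinje cell's mean peri-event responses; the sketch below uses k-means with two clusters, matching the two classes described next, but the feature construction and algorithm choice are our assumptions rather than the authors' exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_purkinje_cells(resp_first_cross, resp_first_back, resp_last_cross, k=2):
    """Cluster Purkinje cells by their mean peri-event responses.
    Each input is an (n_cells,) array of mean z-scored dF/F around one event type
    on the post-test day; each cell is described by a 3-element response vector."""
    features = np.column_stack([resp_first_cross, resp_first_back, resp_last_cross])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return labels

# Toy data for 276 cross-day-aligned cells (random numbers, illustration only).
rng = np.random.default_rng(1)
labels = classify_purkinje_cells(rng.normal(size=276), rng.normal(size=276),
                                 rng.normal(size=276))
print(np.bincount(labels))   # number of cells assigned to each cluster
```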
On the basis of Ca 2+ activity during the first border crossing (with conditioned pain-relief expectation), the first crossing back (without conditioned pain-relief expectation) and the last border crossing (reduced or no conditioned pain-relief expectation due to expectation violation), the classification algorithm resolved two main classes of Purkinje cells (clusters 1 and 2). Notably, during first crossing back and last border crossing, the average Ca 2+ activity of Purkinje cells in cluster 1 declined substantially, whereas the activity of Purkinje cells in cluster 2 increased modestly but significantly (Extended Data Fig. ). Furthermore, the Ca 2+ activity of Purkinje cells in cluster 1, but not cluster 2 (Extended Data Fig. ), progressively increased across the before, during and after conditioning phases, as seen in statistical analyses of individual neurons (Fig. ) and individual mice (Fig. ). To disentangle the distinct contributions of PF and CF inputs to the elevated Ca 2+ activity of Purkinje cells in cluster 1 during PAC, we examined the amplitudes and occurrence frequency of Purkinje cell dendritic Ca 2+ spikes. Ca 2+ spikes in Purkinje cells of cluster 1 occurred at 1.4 Hz during border crossing on the pre-test day, consistent with the spontaneous firing rate of CFs , . However, Ca 2+ spiking increased to 2.6 Hz on the last day of the conditioning phase and to 2.5 Hz on the post-test day (Extended Data Fig. ). By contrast, the Ca 2+ spiking of Purkinje cells in cluster 2 was not significantly altered during PAC (Extended Data Fig. ). Moreover, Purkinje cells of cluster 1 (Extended Data Fig. ), but not cluster 2 (Extended Data Fig. ), displayed more Ca 2+ spikes with large amplitudes after PAC. Furthermore, when considering only Ca 2+ spikes with amplitudes exceeding 3 z -scored Δ F/F , suggestive of supralinear Ca 2+ signals, both the number and amplitude of these spikes were increased for Purkinje cells in cluster 1 (Extended Data Fig. ), but not in cluster 2 (Extended Data Fig. ). Moreover, linear regression analysis showed that, across days of PAC, increased net activity levels of Purkinje cells in cluster 1, but not in cluster 2, were associated with shorter latencies for the first border crossing (Extended Data Fig. ). Both Purkinje cells in cluster 1 and the entire set of recorded Purkinje cells displayed elevated Ca 2+ activity during first border crossing after conditioning, but not during the first crossing back or the last border crossing during the post-test (Extended Data Fig. ). This effect was absent in a shuffled dataset with randomized crossing times (Extended Data Fig. ). Together, these findings support the idea that elevated PF inputs during the PAC assay drive the increased Ca 2+ activity of Purkinje cells in cluster 1 and thereby promote behavioural changes. This similar response of cerebellar Purkinje cells to that of rACC→Pn neurons during PAC directly demonstrates the cerebellum’s involvement in pain-relief expectation and the critical role of the rACC–ponto–cerebellar pathway in placebo analgesia. Pain is a complex experience with sensory-discriminative, affective-motivational and cognitive-evaluative dimensions . Placebo analgesia epitomizes the cognitive-evaluative dimension of pain by demonstrating how cognitive factors such as expectations alter the perception of noxious events or injuries to shape pain subjectivity. To investigate the neural basis of placebo analgesia in rodents, we developed the PAC assay. 
This paradigm effectively establishes an expectation of pain relief in mice, resulting in reduced pain (Fig. ). Although PAC-induced analgesia may not capture all of the complexities of human placebo analgesia, this assay replicates several key features of the phenomenon, including reliance on the endogenous opioid system (Extended Data Figs. and ), the recall of pain-relief expectations over time (Extended Data Fig. ) and the inherent variability observed in human placebo analgesia (Fig. and Extended Data Fig. ). Thus, building on previous approaches to model placebo analgesia in rodents , PAC offers a simple, practical and non-pharmacological method to investigate the biology of expectation-induced analgesia at several levels. At the circuit level, our data suggest that the rACC mediates placebo analgesia by engaging the cerebellum through L5 PT projections to the Pn. Among all Pn neurons, we found that 65% express Oprd1 and/or Oprm1 (Fig. ), suggesting that the activity of Pn neurons is probably modulated by the endogenous opioid system. Notably, naloxone injections during the conditioning phase prevent the attenuation of PAC-induced analgesia caused by three consecutive days of injections (Extended Data Fig. ), and optogenetically inhibiting Oprd1 + Pn neurons during the same phase reduces PAC-induced analgesia (Extended Data Fig. ). Although these observations suggest that associative learning requires the activation of Oprd1 + Pn neurons, aligning with the Ca 2+ imaging data, the contribution of opioid signalling in this process needs to be further confirmed. Similarly, although our data suggest the involvement of both the µ- and δ-opioid receptors in PAC-induced placebo analgesia (Extended Data Fig. ), the specific opioid peptides and receptors within the rACC-Pn-cerebellar pathway, and potentially in other regions, that collectively mediate this phenomenon remain to be established. Moreover, by imaging the dendritic activity of cerebellar Purkinje cells, the sole outputs of the cerebellar cortex, during pain-relief expectation (Fig. and Extended Data Fig. ), our study provides direct, cellular-level evidence for the cerebellum’s contribution to placebo analgesia. The mechanisms by which the cerebellum modulates placebo analgesia remain to be explored. Based on its connectivity with numerous brain regions – , the cerebellum could modulate pain perception through ascending and/or descending pathways. For example, the rACC may recruit the cerebellum to indirectly modulate the descending pain modulation pathway, especially the periaqueductal grey , , to produce analgesia; this could explain the enhanced coactivation between the rACC and periaqueductal grey during placebo analgesia reported in several human studies , . At the cellular level, our Ca 2+ imaging and electrophysiological data show that PAC specifically increases the activity of Pn-projecting L5 rACC neurons (Extended Data Fig. ). While previous studies suggest that PT neurons targeting specific subcortical areas tend to be homogeneous, both genetically and in terms of morphology and function , we cannot exclude the possibility that only a subset of rACC→Pn neurons drives placebo analgesia. Notably, PT (rACC→Pn) and IT neurons display opposite activity during PAC: our electrophysiological data show reduced activity in non-PT L5 rACC neurons after conditioning (Extended Data Fig. 
), and our Ca 2+ imaging data indicate that IT neurons are more active, whereas PT neurons less active, during the last border crossing in the PAC post-test (Extended Data Fig. ). The opposite activity of rACC PT and IT neurons during placebo analgesia is an interesting subject for future study. At the synaptic level, our electrophysiology data show that PAC increases the synaptic plasticity and impairs the feedforward inhibition of rACC→Pn neurons (Fig. ). Feedforward inhibition contributes to burst firing, shapes network representations of behavioural events and modulates ensemble Ca 2+ signalling during learning , . Thus, the diminished feedforward inhibition of rACC→Pn neurons probably underlies their increased burst firing after conditioning and progressively enhanced activity during PAC (Fig. and Extended Data Fig. ). Notably, in the cerebellar cortex, feedforward inhibition has also been shown to gate supralinear Ca 2+ signalling in Purkinje cell dendrites . Given that Purkinje cells display an increased number of supralinear Ca 2+ spikes after PAC (Extended Data Fig. ), suppressed feedforward inhibition may serve as a common synaptic mechanism of pain-relief expectation in both cerebral and cerebellar cortices. In conclusion, this study reveals circuit, cellular and synaptic mechanisms that underlie placebo analgesia and, more broadly, the cognitive-evaluative dimension of pain, bridging the gap with our more advanced understanding of the sensory-discriminative and affective-motivational dimensions of pain , – . Crucially, we provide evidence that this rACC–ponto–cerebellar pathway could be engaged by analgesic drugs, neurostimulation protocols and/or cognitive behavioural therapies to produce pain relief. Animals All of the procedures were performed according to animal care guidelines approved by the Administrative Panel on Laboratory Animal Care (APLAC) of Stanford University, by the Institutional Animal Care and Use Committee (IACUC) of the University of North Carolina at Chapel Hill and by the International Association for the Study of Pain. Mice were housed at a maximum of 5 mice per cage and maintained under a 12 h–12 h light–dark cycle in a temperature-controlled environment with ad libitum access to food and water. Male or female mice with an age range of 8–12 weeks were used for the experiments. C57BL/6 wild-type (000664), TRAP2 ( Fos CreERT2 , 030323) and Pvalb cre (017320) mice were purchased from Jackson Laboratory. Oprd1 cre mice were generated at the Stanford Transgenic Research Center using standard gene targeting procedures. In brief, an IRES-cre cassette was introduced immediately following the Oprd1 stop codon through homologous recombination. After electroporation of the targeting construct into 129Sv/SvJ-derived ES cells, neomycin-resistant ES cell colonies were screened for IRES-cre insertion using long-range PCR and Southern blotting. Flp transfection was then performed to remove the FRT-flanked neomycin-resistance gene. Confirmed neomycin-excised colonies were then injected into C57BL/6 blastocysts to generate chimeric males, which were bred to C57BL/6 females to generate founders. Sample sizes for mouse behaviour and electrophysiology experiments were determined using the power analysis (‘pwr’ R package). Specifically, the function ‘pwr.t.test’ was used, with a significance level of 0.05, a power of 0.80, and effect sizes estimated from pilot experiments and/or previous studies using similar methods. 
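For illustration only, the sample-size calculation described above can be reproduced in Python with statsmodels rather than the ‘pwr’ R package used in the study; the effect size shown is a placeholder, not a value taken from the paper.

```python
# Illustrative analogue of the pwr::pwr.t.test calculation described above.
# The effect size (Cohen's d = 1.2) is an assumed placeholder, not a study value.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.2, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(f"Estimated animals per group: {n_per_group:.1f}")
```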
For the histology experiment, samples were allocated by randomly selecting brain slices containing regions of interest. For all other experiments, samples and animals were randomized into groups. Experimenters were blinded to experimental groups before and during all mouse behavior experiments. Calcium imaging data were analyzed by two independent researchers using two different analysis pipelines at two universities. Drugs 4-OHT (H6278, Sigma-Aldrich) was prepared in absolute ethanol and Kolliphor EL (C5135, Sigma-Aldrich) and administered intraperitoneally (50 mg per kg). Naloxone (N7758, Sigma-Aldrich) was prepared in saline and administered intraperitoneally (5 mg per kg). Viral reagents To express GCaMP7f in rACC→Pn projection neurons for Ca 2+ imaging, we intracranially injected 400 nl of rAAV2-retro-hSyn-Cre-WPRE-hGH (105553-AAVrg; Addgene; titre: 7 × 10 12 viral genomes (vg) per ml) into the right Pn at the coordinates anteroposterior (AP): −4.0 mm, mediolateral (ML): +0.4 mm, dorsoventral (DV): −5.4/−5.8 mm and 400 nl of AAV1-syn-FLEX-jGCaMP7f-WPRE (104492-AAV1; Addgene; titre: 1 × 10 13 vg per ml) into the right ACC (AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm). To express GCaMP6s in ACC IT neurons, we injected 400 nl of AAVretro-EF1a-Flpo (55637-AAVrg; Addgene; titre: 2.2 × 10 13 vg per ml) into the left dorsomedial striatum at the coordinates AP: +0.2 mm, ML: −2.0 mm, DV: −4.1 mm and 400 nl of AAV8-EF1a-fDIO-GCaMP6s (105714-AAV8; Addgene; titre: 1.8 × 10 13 vg per ml) into the right ACC. To express GCaMP8m in cerebellar Purkinje cells for Ca 2+ imaging, we intracranially injected 200 nl of AAV1-CamKIIa-JGCaMP8m (176751-AAV1; Addgene; titre: 1.8 × 10 13 vg per ml) into lobule VI of the vermis at the coordinates AP: −7.0 mm, ML: +0.0 mm, DV: −360 and −200 µm. To trace the output of rACC neurons active during PAC, we intracranially injected 400 nl of AAV-DJ-hSyn-FLEX-mGFP-2A-Synaptophysin-mRuby (Stanford Virus Core; titre: 1.2 × 10 13 vg per ml) into the rACC of TRAP2 ( Fos CreERT2 ) mice at the coordinates AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm. To optogenetically manipulate the activity of the rACC→Pn pathway, we intracranially injected 400 nl of either AAV5-hSyn-eNpHR-eYFP (UNC Vector Core; titre: 4.4 × 10 12 vg per ml) or AAV5-hSyn-ChR2-eYFP (UNC Vector Core; titre: 5.3 × 10 12 vg per ml) into the ACC of WT mice, and 400 nl of AAV5-EF1a-DIO-eNpHR-eYFP (UNC Vector Core; titre: 4.5 × 10 12 vg per ml) into the Pn of Oprd1 cre mice. To optogenetically manipulate the activity of the Oprd1 + Pn neurons receiving monosynaptic inputs from rACC neurons, we intracranially injected in Oprd1 cre mice 400 nl of AAV1-EF1a-Flpo (55637-AAV; Addgene; titre: 2.3 × 10 13 ) into the rACC bilaterally, and then 400 nl AAV8-nEF-Con/Fon-NpHR-EYFP (137152-AAV8; Addgene; titre: 2.6 × 10 13 ) into the Pn bilaterally. To trace the output of ACC and secondary motor cortex projections to the Pn, we intracranially injected 400 nl of AAV5-hSyn-eGFP (UNC Vector Core; titre: 4 × 10 12 vg per ml) into the ACC (AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm) and 400 nl of rAAV5-hsyn-chrimsonr-tdT (UNC Vector Core; titre: 4.6 × 10 12 vg per ml) into the secondary motor cortex (AP: +1.41 mm, ML: +0.75 mm, DV: −1.5 mm). To label rACC→Pn neurons for electrophysiological recording, we intracranially injected 400 nl of AAVretro-CAG-tdTomato (59462-AAVrg; titre: 1.2 × 10 12 vg per ml) into the right Pn at the coordinates listed above. 
To measure the feedforward inhibition of rACC→Pn neurons by PV + interneurons, in addition to the AAVretro-CAG-tdTomato virus injected into the Pn, we injected 400 nl of AAV5-DIO-EF1a-ChR2-eYFP (UNC Vector Core; titre: 4.0 × 10 12 vg per ml) into the rACC to express ChR2 in PV + interneurons. To trace the output of Pn neurons that receive inputs from the ACC, we injected 300 nl of the anterograde transsynaptic virus AAV1-hSyn-cre-WPRE-hGH (Addgene; titre: 1 × 10 13 vg per ml) into the ACC , then injected 400 nl of AAV-DJ-hSyn-FLEX-mGFP-2A-Synaptophysin-mRuby (Stanford Virus Core; titre: 1.2 × 10 13 vg per ml) into the Pn. The coordinates for the ACC and Pn injections were the same as for the Ca 2+ imaging experiment. To trace brain areas forming monosynaptic connections with Oprd1 + Pn neurons, we injected the AAV helper viruses AAV-FLEx-TVA-Mkate and AAV-FLEx-G into the Pn of Oprd1 cre mice. Then, 3 weeks later, a recombinant rabies virus (RVdG) was injected into the Pn. Next, 1 week after injection of the rabies virus, the brains were collected. The labelling efficiency of viruses is governed by a multifaceted interplay of factors, including the virus’s serotype and titre, the selected promoter, and the targeted cell types. Thus, the labelling efficiency of the virus used in this study cannot be consistently quantified. Stereotaxic injection and surgical procedures All surgeries were performed under aseptic conditions. Animals were anaesthetized using isoflurane (07-893-8441, Patterson Veterinary). After anaesthesia induction with 4% isoflurane in a chamber, animals were transferred to a small-animal digital stereotaxic instrument (David Kopf Instruments) and anaesthesia was maintained using 2% isoflurane. Injections were performed using a calibrated microcapillary tube (P0549, Sigma-Aldrich) and pulled with a P-97 micropipette puller (Sutter Instruments). The viral reagents were aspirated into the tube using negative pressure and delivered at a rate of around 50 nl min −1 using positive pressure. After injection, the tube was raised ~100 µm and held stationary for an additional 10 min to allow diffusion of the virus, and then slowly withdrawn at a rate of 0.05 mm s −1 . After surgery, the mice were transferred to a warm chamber until they had fully recovered, and were then returned to their home cage. Microendoscope or optical cannula implantation Microendoscope implantation in the ACC was performed as described previously , . In brief, we stereotaxically implanted a stainless steel cannula 1 week after viral injection. The cannula was fabricated with 18-G 304 S/S Hypodermic Tubing, custom cut to pieces 4.3 mm in length (Ziggy’s Tubes and Wires) and attached at one end to a Schott Glass 2 mm in diameter and 0.1 mm thick (TTL) using an optical adhesive (Norland Optical Adhesive #81, Thermo Fisher Scientific). After grinding away the excess glass using a polisher, the cannulas were carefully stored until use in implantation surgeries. For cannula implantation surgeries, mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) while the body temperature was maintained using a heating pad. After cranial hair removal, skin sterilization and scalp incision, we performed small craniotomies in three locations (AP: +5.10 mm, −3.56 mm, −3.56 mm; ML: −0.77mm, +2.06 mm, −3.01 mm) and then screwed three stainless steel screws (MX-000120-01SF, Component Supply Company) down to the dura of the skull to stabilize the implantation. 
We then performed a fourth craniotomy using a drill (Model EXL-M40, Osada) and a 1.4 mm round drill burr (19007-14, FST) at the coordinates AP: +0.75 mm, ML: +1.25 mm. The bone fragments and other detritus were cleared away from the opening using sterilized forceps. To prevent any increase in intracranial pressure and improve image quality, we aspirated away the overlying tissue down to approximately DV: −1.0 mm at an angle of 18°. The custom cannula was then attached to a holder (David Kopf Instruments) and lowered to AP: +0.75 mm, ML: +1.25 mm, DV: −1.8 mm at 18°. Blood and additional debris around the craniotomy were quickly removed and adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) was applied to seal the gap between the cannula and the skull. A custom-designed laser-cut headbar (18–24 G stainless steel, LaserAlliance) was placed over the left posterior skull screw, then layers of dental cement (Lang Dental) were applied to affix both the cannula and headbar to the skull. After the cement dried (7–10 min), we transferred the animal to a heated pad for recovery. After full recovery, the mice were returned to their home cage. Placement of the cranial window After cranial hair removal, skin sterilization and scalp incision, we performed small craniotomies in two locations (AP: +5.10 mm, −3.56 mm; ML: +0.77 mm, −2.89 mm) and then screwed two stainless steel screws (MX-000120-01SF, Component Supply Company) down to the dura of the skull to stabilize the implantation. We then opened an approximately 4-mm-diameter craniotomy above lobule VI of cerebellar vermis (7.0 mm posterior to bregma, 0.0 mm lateral). We first injected a virus expressing GCaMP8m into the cerebellum (DV: −360 to 200 µm). After virus injection, we gently removed the dura with fine forceps (91197-00, FST). Next, we applied Kwik-Sil (World Precision Instruments) to the border of the craniotomy. We then covered the brain with a 3-mm-diameter coverslip that we attached beneath a 5-mm-diameter coverslip before the experiment using ultraviolet-light-activated epoxy (Norland Optical Adhesive #81, Thermo Fisher Scientific). We fixed the 5-mm-diameter coverslip to the cranium with adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) and dental cement (Lang Dental). After the cement dried (7–10 min), we transferred the animal to a heated pad for recovery. After full recovery, the mice were returned to their home cage. Verification of microendoscope implantation and GCaMP expression in awake, behaving mice Three weeks after the cannula implantation, we verified the GCaMP7f/8m fluorescence and Ca 2+ activity in awake mice on a custom-designed apparatus to avoid using any general anaesthetics . Mice were head-fixed by clamping (CC-1, Siskiyou) their headbar and were allowed to run on a freely rotating wheel (InnoWheel, Thermo Fisher Scientific). For imaging rACC→Pn neurons, a naked 1.0-mm-diameter gradient refractive index (GRIN) lens probe (1050-004598, Inscopix) was lowered into the implanted cannula using forceps. A miniature microscope (nVoke, Inscopix) was attached to a holder (1050-002199, Inscopix) connected to a goniometer (GN1, Thorlabs) for x – y and y – z plane tilting. The holder was connected to a three-axis micromanipulator for lowering the miniature microscope to the optimal focal plane. 
Image acquisition software (nVoke, Inscopix) was used to display the incoming image frames in units of relative fluorescence changes (Δ F / F ), enabling observation of Ca 2+ activity in awake, behaving mice. If we observed Ca 2+ transients, we proceeded by mounting the miniature microscope baseplate. Miniature microscope baseplate mounting Mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) and placed onto the stereotaxic instrument. The GRIN lens probe was fixed in place with ultraviolet-light-curable epoxy (Loctite Light-Activated Adhesive, 4305). The miniature microscope with baseplate attached was stereotaxically lowered toward the top of the GRIN lens probe or coverslips until the brain tissue was in focus. We then adjusted the orientation of the miniature microscope until it was parallel to the surface of the GRIN lens probe. The baseplate was then fixed onto the skull with dental cement. To prevent external light from contaminating the imaging field of view during recording, the outer layer was coated with black nail polish (Black Onyx NL T02, OPI). After attaching the baseplate cover (1050-004639, Inscopix), the mice were transferred to a heated pad for recovery. The mice were then returned to their home cages and housed individually. Ca 2+ imaging video recording, cell extraction and estimation of firing rates To perform Ca 2+ imaging in mice during PAC, we used an implanted GRIN lens for rACC→Pn neurons or a cranial window for cerebellar Purkinje cells and a miniaturized microscope (nVista, Inscopix). Miniature microscopes were mounted onto the head of the mouse before each behavioural experiment using a custom mounting station. Images were acquired using the Inscopix Data Acquisition Software (IDAS; Inscopix) at a frame rate of 20 Hz. The light-emitting diode intensity was set at 1.5 mW for imaging rACC→Pn neurons or 1 mW for imaging cerebellar Purkinje cell dendrites. A gain of 2 was used for all mice. Before performing cell extraction, we first corrected for brain motion in the videos using the TurboReg motion-correction algorithm. We then extracted both the spatial filters and the robust time traces for the regions of interest (ROIs) using EXTRACT , a robust cell extraction routine. We matched the resulting ROIs across days based on Tanimoto similarity, defined as $T(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}\cdot\mathbf{y}}{\lVert\mathbf{x}\rVert^{2}+\lVert\mathbf{y}\rVert^{2}-\mathbf{x}\cdot\mathbf{y}}$, where $\mathbf{x}$ and $\mathbf{y}$ are vectors corresponding to flattened spatial filters. We adjusted the Tanimoto similarity cut-off for each mouse, based on visual inspection of the results, then reinitialized EXTRACT with the global cell map. This process allows EXTRACT to find cells that may be missed on different days. We performed this routine of cell extraction, across-day registration and reinitialization for five iterations. We next visually inspected all videos and discarded spurious, duplicate and dendritic ROIs. Lastly, using the verified ROIs that correspond to cells, we performed a final robust regression via EXTRACT to obtain the final traces.
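As a minimal sketch of the cross-day ROI matching described above (not the authors' code), the Tanimoto similarity between flattened spatial filters can be computed and used to pair ROIs across sessions; the greedy pairing rule and the example cut-off are assumptions for illustration.

```python
# Sketch of cross-day ROI matching via Tanimoto similarity on flattened spatial
# filters. The greedy pairing and the example cut-off of 0.4 are illustrative
# choices; in the study the cut-off was tuned per mouse by visual inspection.
import numpy as np

def tanimoto(x: np.ndarray, y: np.ndarray) -> float:
    """T(x, y) = x.y / (||x||^2 + ||y||^2 - x.y)."""
    dot = float(np.dot(x, y))
    return dot / (float(np.dot(x, x)) + float(np.dot(y, y)) - dot)

def match_rois(filters_day1, filters_day2, cutoff=0.4):
    """Greedily pair day-1 and day-2 spatial filters whose similarity exceeds cutoff."""
    pairs, used = [], set()
    for i, f1 in enumerate(filters_day1):
        candidates = [(tanimoto(f1.ravel(), f2.ravel()), j)
                      for j, f2 in enumerate(filters_day2) if j not in used]
        if not candidates:
            continue
        best_score, best_j = max(candidates)
        if best_score >= cutoff:
            pairs.append((i, best_j, best_score))
            used.add(best_j)
    return pairs
```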
We took the z -score of the traces, subtracting the mean and dividing by the s.d., to standardize the trace units across days. To obtain a discrete approximation of the firing rates, we first thresholded the raw traces by a constant value, chosen as 0.5× the maximum value of the trace. The thresholded traces were searched for peak points that were at least 3 frames apart from each other, and then binarized at these peaks (Extended Data Fig. ). The firing rates were obtained by convolving the binarized peaks through a Gaussian kernel with an s.d. of 500 ms. As we thresholded the traces at the beginning and binarized the peaks at the end before convolution, the resulting firing rates did not necessarily contain all of the event times, but should be considered a discrete approximation using the binarized versions of large Ca 2+ events. We chose this approximation of firing rates for its robustness against day-to-day variations in the cell baselines and/or absolute Δ F/F values. Analysis of neural responses during crossing times across days For each condition, the mean activity of each neuron during border crossing times was calculated by averaging the z -scored neural activities (firing rates for Extended Data Fig. ) within a time window beginning 2 s before and ending 2 s after the crossing event. We chose the averaged activities inside the time window instead of performing a point estimation of neural activity. This choice was motivated by our desire to mitigate the measurement errors in crossing times from the behavioural videos and to remain agnostic to the fine structure of the neural code in short time durations, as our interest is in understanding whether the neural population reacts differently, corresponding to the level of pain relief expectation during crossing and crossing back, during each phase of PAC. To determine the extent to which the neural responses of individual cells may differ between first border crossing and crossing back, we computed a discriminability index $(d')^{2}$ for each neuron using the equation $(d')^{2}=\frac{\mu_{\mathrm{forward}}-\mu_{\mathrm{backward}}}{\sigma_{\mathrm{pooled}}}$, where $\mu_{\mathrm{forward}}$ and $\mu_{\mathrm{backward}}$ are the mean activity of the neuron calculated during first crossing and crossing back, respectively. $\sigma_{\mathrm{pooled}}$ is the pooled s.d. from both conditions. To perform randomized controls for all analyses, we averaged the values of interest (discriminability index, z -scored traces and/or firing rates) with randomly selected crossing times for each neuron 100 times and created a null distribution over the neural population. Classification of Purkinje cells We conducted k -means clustering analysis to categorize Purkinje cells based on their activity during the first border crossing, first crossing back and the last crossing on the post-test day. We extracted the Ca 2+ activity of each Purkinje cell during these events within the 4 s border crossing period for each condition and then concatenated them, resulting in a Ca 2+ activity trace for each Purkinje cell with a total duration of 12 s. Subsequently, we performed silhouette analysis to determine the optimal number of clusters for this dataset, which we found to be two.
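The cluster-number selection and the subsequent k-means step can be sketched as follows; this is an illustrative reimplementation with scikit-learn (one row per cell of concatenated, crossing-aligned activity), not the analysis code used in the study, and the candidate k values and random seed are assumptions.

```python
# Sketch of silhouette-based selection of the cluster number followed by k-means,
# applied to concatenated event-aligned Ca2+ traces (one row per Purkinje cell).
# Candidate k values and the random seed are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def classify_purkinje_cells(traces: np.ndarray, candidate_k=(2, 3, 4, 5), seed=0):
    """traces: (n_cells, n_timepoints) array of concatenated crossing-aligned activity."""
    scores = {}
    for k in candidate_k:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(traces)
        scores[k] = silhouette_score(traces, labels)
    best_k = max(scores, key=scores.get)  # highest mean silhouette score
    labels = KMeans(n_clusters=best_k, n_init=10, random_state=seed).fit_predict(traces)
    return best_k, labels, scores
```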
Accordingly, we then applied k -means clustering to classify all cross-day-aligned Purkinje cells into two clusters. Purkinje cell Ca 2+ spike frequency and amplitude analysis To find Ca 2+ events in Purkinje cells, we first applied a threshold to the raw traces using a constant value set at 0.1 times the maximum amplitude of the trace. The thresholded traces were then scanned for peaks that were at least 3 frames apart from each other and binarized at these peaks. For frequency analysis, we counted the number of Ca 2+ events in the binarized traces of each cell during border crossing and calculated their firing frequency during this 4 s border-crossing period (Extended Data Fig. ). For Ca 2+ spike amplitude analysis, we extracted the value from the z -scored Ca 2+ traces of each cell at the timepoint of detected Ca 2+ events as the amplitude. We then selected these amplitudes during the 4 s border-crossing time for comparison (Extended Data Fig. ). For Ca 2+ spike waveform analysis, we used the z -scored trace of each Purkinje cell to identify the start, peak and end points of each spike that had an amplitude larger than 3 z -scored Δ F/F (Extended Data Fig. ). Histology Tissue collection and processing Mice were transcardially perfused with phosphate-buffered saline (PBS) followed by 4% formaldehyde in PBS. Brains were then dissected, post-fixed in 4% formaldehyde, and cryoprotected in 30% sucrose. Tissues were then frozen in Optimum Cutting Temperature compound (OCT; 4583, Tissue Tek) and sectioned using a cryostat (Leica). The brains were sectioned at 40 μm and stored in PBS at 4 °C if used immediately. For longer storage, tissue sections were placed in glycerol-based cryoprotectant solution and stored at −20 °C. For in situ hybridization, tissues were sectioned at 14 μm, collected on Superfrost Plus slides (22-037-246, Thermo Fisher Scientific) and stored at −80 °C. Immunohistochemistry Tissues were incubated for 1 h and blocked in 0.1 M PBS with 0.3% Triton X-100 (Sigma-Aldrich) plus 5% normal donkey serum. Primary and secondary antibodies were diluted in 0.1 M PBS with 0.3% Triton X-100 plus 1% normal donkey serum. The sections were then incubated overnight at 4 °C in primary antibody solution, washed in 0.1 M PBS with 0.3% Triton X-100 for 40 min, incubated for 2 h in secondary antibody at room temperature and washed in 0.1 M PBS for 40 min. The sections were then mounted using Fluoromount-G (00-4958-02, Thermo Fisher Scientific). Images were acquired on the Zeiss LSM 780 confocal microscope using Zeiss Zen software, running on a Windows PC, in the UNC Neuroscience Microscopy Core. A streptavidin conjugate (Alexa Fluor 594 conjugate, Thermo Fisher Scientific; 1:1,000) was used to visualize biocytin. In situ hybridization For in situ hybridization experiments, we used Advanced Cell Diagnostics RNAscope Technology (ACD Bioscience). In brief, wild-type mice 5–8 weeks old were deeply anaesthetized with 0.1 ml of Euthasol (NDC-051311-050-01, Virbac) and perfused transcardially with 0.1 M PBS followed by 4% formaldehyde solution in PBS. Brains were dissected, cryoprotected in 30% sucrose overnight and then frozen in OCT. Frozen tissue was cut at 20 μm onto Superfrost Plus slides and stored at −80 °C. Tissue was thawed from −80 °C, washed with PBS at room temperature and subsequently processed according to the protocol provided by the manufacturer. 
We first pretreated the tissue with solutions from the pretreatment kit to permeabilize the tissue, then incubated with protease for 30 min and then with the hybridization probe(s) for another 2 h at 40 °C. Data analysis for the output of TRAPed rACC neurons To quantify the output of neurons in the rACC TRAPed after PAC, we analysed the expression of mRuby in putative presynaptic axonal terminals. First, background subtraction was performed on the mRuby channels of each image using the rolling-ball algorithm in ImageJ. Subsequently, the images were thresholded to a value 4 times the mean background intensity, converting them into binary format. The pixel densities of these binary images were then calculated and normalized to the size of the specific regions displaying mRuby expression. Electrophysiology ACC slice preparation We began the PAC paradigm 3–4 weeks after virus injection. Mice were euthanized immediately after the conditioning phase of PAC. After decapitation, the brain was rapidly collected and immersed in ice-cold slicing solution containing 87 mM NaCl, 25 mM NaHCO 3 , 2.5 mM KCl, 1.25 mM NaH 2 PO 4 , 10 mM d -glucose, 75 mM sucrose, 0.5 mM CaCl 2 and 7 mM MgCl 2 (pH 7.4 in 95% O 2 and 5% CO 2 , 325 mOsm). Coronal brain slices 300 μm thick and containing the rACC were cut using a VT1200 vibratome (Leica Microsystems). After around 20 min incubation at 35 °C, the slices were stored at room temperature. Slices were then transferred to the chamber for electrophysiological recording. Slices were used for a maximum of 5 h after dissection. The experiments were performed at 21–24 °C. During the experiment, slices were superfused with a physiological extracellular solution containing 125 mM NaCl, 2.5 mM KCl, 25 mM NaHCO 3 , 1.25 mM NaH 2 PO 4 , 25 mM d -glucose, 2 mM CaCl 2 , and 1 mM MgCl 2 (pH 7.4 in 95% O 2 and 5% CO 2 , ~325 mOsm). Whole-cell patch recording of rACC→Pn neurons was performed as described previously . The pipettes (1B150F-4, WPI) were formed using a P-97 puller (Sutter Instruments). The resistance was 3–5 MΩ. Measuring action potential properties, spontaneous release and LTP induction The intracellular solution used for testing the action potential firing properties, spontaneous release and LTP induction of rACC→Pn neurons contained 135 mM K-gluconate, 20 mM KCl, 0.1 mM EGTA, 2 mM MgCl 2 , 2 mM Na 2 ATP, 10 mM HEPES and 0.3 mM Na 3 GTP (pH adjusted to 7.28 with KOH, ~310 mOsm); in a subset of recordings, 0.2% biocytin was added. To measure membrane properties and evoke action potential firing of rACC→Pn neurons, a 1 s step current (−50, 0, 50, 100, 150, 200, 250, 300 pA) was injected into the cell through the recording pipette. Spontaneous EPSCs were recorded while holding the rACC→Pn neurons at −70 mV. For LTP induction, biphasic electrical stimulations (5–8 V, 100 ms) were delivered by placing a borosilicate theta glass (2.0 mm, Warner Instruments) in layer II/III of the rACC. The glass was pulled using a vertical pipette puller and filled with perfusion solution. The fibre was stimulated using the DS4 Bi-Phasic Current Stimulator (Digitimer) at 0.02 Hz to measure evoked EPSCs of the rACC→Pn neurons for 6 min as the baseline. TBS (5 trains of burst with 4 pulses at 100 Hz, at 200 ms intervals, repeated 4 times at intervals of 10 s) was then administered to induce LTP. After LTP induction, evoked EPSCs were recorded for another 30 min to compare against the baseline. No blocker was used to block inhibitory synaptic inputs. 
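Reading the TBS protocol above as bursts of 4 pulses at 100 Hz, 5 bursts per train delivered at 200-ms intervals and 4 trains delivered 10 s apart, the pulse timing can be laid out as in the following sketch; this interpretation of the wording, and the generation of timestamps rather than actual stimulator control, are assumptions for illustration.

```python
# Illustrative timestamp generator for the TBS protocol described above
# (4-pulse bursts at 100 Hz, 5 bursts per train at 200-ms intervals,
# 4 trains at 10-s intervals). Not stimulator control code.
import numpy as np

def tbs_pulse_times(pulses_per_burst=4, pulse_hz=100.0,
                    bursts_per_train=5, burst_interval_s=0.2,
                    n_trains=4, train_interval_s=10.0) -> np.ndarray:
    """Return the onset time (in seconds) of every pulse in the protocol."""
    times = []
    for train in range(n_trains):
        t_train = train * train_interval_s
        for burst in range(bursts_per_train):
            t_burst = t_train + burst * burst_interval_s
            for pulse in range(pulses_per_burst):
                times.append(t_burst + pulse / pulse_hz)
    return np.array(times)

pulse_times = tbs_pulse_times()           # 80 pulses in total
print(len(pulse_times), pulse_times[-1])  # last pulse at ~30.83 s
```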
Measuring AMPA/NMDA ratio, PPR and feedforward inhibition The Cs + -based intracellular solution used to measure the AMPA/NMDA ratio and PPR contained 130 mM Cs-methanesulfonate, 2 mM KCl, 10 mM EGTA, 2 mM MgCl 2 , 2 mM Na 2 ATP, 10 mM HEPES and 5 mM QX-314 (pH adjusted to 7.28 with CsOH, ~310 mOsm). To evoke synaptic response of rACC→Pn neurons, electrical stimulation (50–80 μA, 100 μs) was delivered by placing a concentric bipolar electrode (FHC) in layer II/III of the rACC. The selective GABA A receptor antagonist SR-95531 (10 μM; Sigma-Aldrich) was used to block IPSCs. To record EPSCs mediated by both AMPA and NMDA receptors, membrane potentials were held at voltages increasing from −80 mV to +60 mV. To measure the PPR, two electrical stimulations at different time intervals (20, 50, 100, 200, 500 ms) were used to evoke synaptic transmission. The membrane potential was set to −30 mV to record both EPSCs and IPSCs in the same trace (Extended Data Fig. ) or to either −70 mV or +10 mV to examine EPSCs or IPSCs in isolation (Fig. ). Measuring inhibitory input from PV + interneurons Slices from Pvalb cre mice were prepared as described above. An intracellular solution containing high chloride concentration (140 mM KCl, 10 mM EGTA, 2 mM MgCl 2 , 2 mM ATP, 10 mM HEPES and 2 mM QX-314; pH adjusted to 7.28 with KOH; 313 mOsm) was used for postsynaptic recordings of rACC→Pn neurons, which were conducted in the voltage-clamp configuration with a holding potential of −70 mV. For all voltage-clamp recordings, we applied hyperpolarizing test pulses (5 mV, 100 ms) to monitor series and input resistance throughout the entire experiment. Data from experiments in which series resistance changed more than 15% were discarded. Data acquisition and analysis Electrophysiological data were acquired using the Multiclamp 700b amplifier (Axon Instruments), low-pass filtered at 10 kHz, and sampled at 20 or 50 kHz using the Digidata 1440A low-noise digitizer (Axon Instruments). Stimulation and data acquisition were performed using Clampfit 10 software (Axon Instruments). Data were analysed using Stimfit v.0.14.9 ( https://github.com/neurodroid/stimfit ), Clampfit v.11.2 (Molecular Devices) and R v.4.0.3 (The R Project for Statistical Computing). sEPSCs were detected using a template-matching algorithm and verified by visual inspection . The location at which the peak EPSC was recorded while holding the membrane potential at −80 mV was used to measure the amplitude of AMPAR EPSCs. The amplitude of NMDAR EPSCs was measured 50 ms after the electrical stimulation. The synaptic latency of monosynaptic EPSCs or IPSCs was measured from the onset of the electrical stimulus to the onset of the EPSC or IPSC. The disynaptic IPSC delay (Fig. ) was measured from the onset of the EPSC at −70 mV to the onset of the IPSC at +10 mV. Cannula implantation and optogenetic manipulations For fibreoptic cannula implantation surgeries, mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) while the body temperature was maintained using a heating pad. After cranial hair removal, skin sterilization and scalp incision, we bilaterally injected a virus encoding inhibitory or excitatory opsin into the ACC using the coordinates described above for manipulating ACC terminals in the Pn. To manipulate the activity of Oprd1 + cells in the Pn, we bilaterally injected a virus encoding an inhibitory opsin into the Pn at the coordinates AP: −4.0 mm, ML: ±0.4 mm, DV: −5.4/−5.8 mm. 
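Returning to the synaptic measurements defined above, a hedged sketch of how the AMPA/NMDA ratio and PPR could be computed from baseline-subtracted current traces is given below; the depolarized holding potential assumed for the NMDAR amplitude (+60 mV), the sampling rate and the analysis windows are assumptions, since only the measurement time points are stated in the text.

```python
# Illustrative computation of the AMPA/NMDA ratio (AMPAR amplitude = peak EPSC at
# -80 mV; NMDAR amplitude = current 50 ms after stimulation at an assumed +60 mV
# holding potential) and the paired-pulse ratio (PPR). Baseline subtraction,
# sampling rate and window lengths are simplifying assumptions.
import numpy as np

def ampa_nmda_ratio(trace_minus80, trace_plus60, stim_idx, fs_hz=20000.0):
    """Both traces are baseline-subtracted currents aligned to the same stimulus."""
    ampa_amp = np.max(np.abs(trace_minus80[stim_idx:]))   # peak EPSC at -80 mV
    idx_50ms = stim_idx + int(0.050 * fs_hz)              # 50 ms after stimulation
    nmda_amp = abs(trace_plus60[idx_50ms])
    return ampa_amp / nmda_amp

def paired_pulse_ratio(trace, stim1_idx, stim2_idx, win_s=0.02, fs_hz=20000.0):
    """PPR = amplitude of the second evoked EPSC divided by the first."""
    w = int(win_s * fs_hz)
    p1 = np.max(np.abs(trace[stim1_idx:stim1_idx + w]))
    p2 = np.max(np.abs(trace[stim2_idx:stim2_idx + w]))
    return p2 / p1
```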
To manipulate the activity of Oprd1 + cells in the Pn that receive rACC inputs, we bilaterally injected the AAV1-Flpo virus into the rACC, then another virus into the Pn to express an inhibitory opsin in a Cre- and Flp-dependent manner. After virus injection, we performed small craniotomies in three locations (AP: +5.10, −1.06, −3.56 mm; ML: −0.77, +2.87, −3.13 mm). Next, to stabilize the implantation, three stainless steel screws (MX-000120-01SF, Component Supply Company) were drilled into the dura of the skull. We then performed two additional craniotomies at the coordinates AP: −4.0 mm, ML: ±1.2 mm. The cannula (CFMLC12L05, Thorlabs or RWD) was then attached to a holder (David Kopf Instruments) and lowered at 10° to the coordinates AP: −4.0 mm, ML: ±1.2 mm, DV: −4.9 mm. Blood and debris around the craniotomy were quickly removed and adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) was used to seal the gap between the cannula and skull. A custom-designed laser-cut headbar (18–24 G stainless steel, LaserAlliance) was placed over the left posterior skull screw, then layers of dental cement were applied (Lang Dental) to affix both the cannula and headbar to the skull. After the cement dried (7–10 min), we transferred the animal to a heated pad until full recovery, then to their home cage. For optogenetic photostimulation of inhibitory (eNpHR3.0) or excitatory (ChR2) opsins, ferrules were connected to a 561 nm (yellow) laser diode (MGL-FN-561, Opto Engine) or 494 nm (blue) laser diode (MBL-III-473, Opto Engine LLC) using an FC/PC adaptor and a fibreoptic rotary joint (Thorlabs). The laser output was controlled using a shutter controller (SR470, Stanford Research System), which delivered yellow light continuously for the inhibitory opsin and 4 ms blue light pulses at 20 Hz for the excitatory opsin. Light output through the optical fibres was adjusted to ~5 mW at the tip of the optical fibre for inhibition and ~10 mW at the tip of the optical fibre for excitation. Behavioural tests For all behavioural assays described below, mice were acclimatized to the researcher and testing environment for at least 30 min before testing. PAC assay to induce and evaluate placebo analgesia The PAC apparatus consists of two adjacent and visually distinct chambers, using two separate thermal plates (BIOSEB) as the floor. PAC is a 7-day behavioural assay consisting of three phases: habituation (days 1–2) and pre-test (day 3), conditioning (days 4–6), and post-test (day 7; Fig. ). During the habituation and pre-test phases, the floors of both chambers are set at 30 °C and the mice are free to explore both compartments for 3 min; their performance on the pre-test day is compared with their performance on the post-test day. During the conditioning phase, the floor of the chamber on which the mouse begins the session (chamber 1) is set at 48 °C. Mice progressively learn that chamber 1 is painful and come to associate chamber 2, which remains at 30 °C, with pain relief. On the post-test day, the floors of both chambers are set at 48 °C to evaluate any analgesic effect induced by the expectation of pain relief. The performance of mice was recorded for 3 min using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks). The recorded videos were analysed using the machine-learning-based algorithm DeepLabCut or Ethovision XT15 (Noldus).
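For clarity, the 7-day PAC schedule described above can be summarized in a compact configuration; the data structure below is purely illustrative.

```python
# Compact encoding of the PAC schedule described above:
# day -> (phase, chamber-1 plate temp in deg C, chamber-2 plate temp in deg C, minutes).
PAC_SCHEDULE = {
    1: ("habituation",  30, 30, 3),
    2: ("habituation",  30, 30, 3),
    3: ("pre-test",     30, 30, 3),
    4: ("conditioning", 48, 30, 3),
    5: ("conditioning", 48, 30, 3),
    6: ("conditioning", 48, 30, 3),
    7: ("post-test",    48, 48, 3),
}

def plate_settings(day: int):
    """Return the phase, plate temperatures and trial duration for a given day."""
    return PAC_SCHEDULE[day]
```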
We quantified and compared the latency of border crossings, time spent in each chamber and nocifensive behaviours (licking, rearing, jumping) of conditioned and unconditioned mice (Fig. ). Naloxone injection To investigate whether endogenous opioid activity is necessary for PAC-induced placebo analgesia, we injected mice with saline or naloxone (N7758, Sigma-Aldrich) intraperitoneally (5 mg per kg) during the conditioning phase (days 4 to 6; Extended Data Fig. ) or before the post-conditioning test on day 7 (Extended Data Fig. ). After injection, the mice were returned to their home cage for at least 30 min to reduce injection-induced stress. Saline-injected mice were used as controls. TRAP of rACC neurons during PAC Two weeks after virus injection, TRAP2 mice were subjected to an adjusted PAC assay (30 min conditioning phase on days 4–6 instead of 3 min) to label the rACC neurons encoding expectation of pain relief. TRAP2 mice were injected with 4-hydroxytamoxifen (50 mg per kg, subcutaneous) on the last day of the conditioning phase (day 6) immediately before conducting the PAC trial. After injection, the mice were allowed to remain in the PAC apparatus for 30 min, and then returned to their home cages. Then, 2 weeks later, we perfused the mice and dissected the brains to determine synaptophysin–mRuby expression in the rACC and other brain areas. Mice that underwent the same procedure but with both chambers set at 30 °C were used as controls. Pin prick To examine the Ca 2+ activity of rACC→Pn neurons during noxious mechanical stimulation (Extended Data Fig. ), we gently touched the plantar surface of the hindpaw with a 25 G needle 10 times at an interval of around 30 s (Extended Data Fig. ). As a control, a needle with a blunt end was used to measure the Ca 2+ activity of rACC→Pn neurons during innocuous mechanical stimulation. The entire procedure was recorded using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks) and synchronized with the miniscope. Hindpaw radiant heat (Hargreaves) test To examine the Ca 2+ activity of rACC→Pn neurons during noxious thermal stimulation (Extended Data Fig. ), we used the Hargreaves test. Mice were placed in plastic chambers on a glass surface heated to 25 °C, through which a radiant heat source (Department of Anesthesiology, UC San Diego) could be focused onto the hindpaw. We recorded the performance of mice using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks) and synchronized with the miniscope. Von Frey withdrawal threshold test Eight von Frey filaments (Stoelting), ranging from 0.007 to 6.0 g, were used to assess mechanical withdrawal thresholds. Filaments were applied perpendicular to the ventral–medial hindpaw surface with sufficient force to cause a slight bending of the filament. A positive response was characterized by a rapid withdrawal of the paw away from the stimulus fibre within 4 s. The up–down method was used to determine the mechanical threshold (50% withdrawal threshold) . Von Frey withdrawal frequency test To evaluate mechanical sensitivity, we used six von Frey filaments (0.07, 0.16, 0.4, 1.0, 1.4 and 6.0 g). Filaments were applied perpendicular to the ventral–medial hindpaw surface with sufficient force to cause a slight bending of the filament. Each filament was applied for 1 s. A positive response was characterized by a rapid and immediate withdrawal of the paw away from the filament. Each filament was applied five times. The frequency of reflexive withdrawal responses was calculated.
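As a sketch of how the tracked PAC videos can be reduced to the behavioural measures listed at the start of this subsection (first-crossing latency and time spent in each chamber), the following illustration operates on a tracked body-centre x-coordinate such as a DeepLabCut output; the crossing criterion and the frame rate are assumptions.

```python
# Illustrative reduction of a tracked body-centre x-position to first-crossing
# latency and per-chamber occupancy. The crossing criterion (centre passes the
# border x-coordinate) and the 30-fps frame rate are assumptions.
import numpy as np

def pac_metrics(x_pos: np.ndarray, border_x: float, fps: float = 30.0):
    """x_pos: per-frame x-coordinate; chamber 1 is the side where the trial starts."""
    start_side = np.sign(x_pos[0] - border_x)
    side = np.sign(x_pos - border_x)
    crossed = np.flatnonzero(side == -start_side)
    latency_s = crossed[0] / fps if crossed.size else np.nan  # first border crossing
    time_chamber1_s = np.count_nonzero(side == start_side) / fps
    time_chamber2_s = np.count_nonzero(side == -start_side) / fps
    return latency_s, time_chamber1_s, time_chamber2_s
```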
Hotplate test Mice were acclimatized to the testing environment as described above. The plate temperature was set at 48 °C or 52 °C to measure thermal pain threshold. The mouse was placed onto the plate and the latency preceding licking and/or biting of a hindpaw was scored. To prevent tissue damage, a cut-off of 3 min or 1 min was set for the 48 °C and 52 °C plates, respectively. Formalin test An intraplantar injection (20 µl) of 2.5% formalin was performed in the left hindpaw of mice after the conditioning phase of PAC. The mouse behaviour was recorded for 30 min within the PAC apparatus or using a four-camera set-up enabling synchronized capture of each lateral angle. The time spent licking the injected hindpaw was scored using Ethovision XT15 (Noldus) or automatically scored using DeepEthogram, an unbiased, pixel-based machine learning algorithm . scRNA-seq and snRNA-seq Sample preparation, library generation and sequencing For low-throughput, high-depth scRNA-seq, we used the SMART-seq v4 Ultra Low Input RNA Kit for Sequencing (SSv4; TakaraBio) as described previously . To focus our high-depth analysis on neurons, we used Snap25-IRES2-cre;Ai14 ( Snap25-tdT ) mice, which express the fluorescent reporter tdTomato in neurons. The Pn was microdissected from two 8-week-old Snap25 -tdT mice (one male and one female). The mice were anaesthetized with isoflurane and perfused with artificial cerebrospinal fluid comprising CaCl 2 (0.5 mM), glucose (25 mM), HCl (96 mM), HEPES (20 mM), MgSO 4 (10 mM), NaH 2 PO 4 (1.25 mM), myo-inositol (3 mM), N -acetylcysteine (12 mM), NMDG (96 mM), KCl (2.5 mM), NaHCO 3 (25 mM), sodium l -ascorbate (5 mM), sodium pyruvate (3 mM), taurine (0.01 mM) and thiourea (2 mM), bubbled with carbogen (95% O 2 and 5% CO 2 ). The Pn was microdissected and embedded in 2% agarose, sliced into 250 µm sections with a vibratome, then subjected to enzymatic digestion with pronase (1 mg ml −1 ) for 70 min at room temperature and triturated using fire-polished Pasteur pipettes to generate single-cell suspensions. Live single Pn neurons were isolated into eight-well strips containing SSv4 lysis buffer based on DAPI − tdTomato + using fluorescence-activated cell sorting, then stored at −80 °C. To prepare single-cell transcriptome libraries, polyadenylated RNAs were reverse transcribed into full-length cDNA and subjected to 18 PCR amplification cycles according to the SSv4 protocol. Single-cell libraries were indexed and prepared for Illumina sequencing using the Nextera XT DNA Library Preparation Kit. Multiplexed libraries were sequenced on the HiSeq 2500 sequencers to generate 100 bp paired-end reads at a depth of 2.5 million reads per cell. Single-cell FastQ files were aligned to the mm10 mouse genome (GRCm38) using STAR (v.2.7.3a) . For high-throughput, low-depth single-nucleus RNA-seq (snRNA-seq) we used the 10x Chromium 3′ V3 System (10x Genomics). We microdissected the Pn from two 8-week-old female C57BL/6J mice. Pn tissues were pooled and flash-frozen on dry ice. Single-nucleus isolation was performed as described previously . Tissue was placed into a prechilled Dounce homogenizer (Kimble) containing 500 µl chilled detergent lysis buffer (0.10% Triton X-100, 0.32 M sucrose, 10 mM HEPES (pH 8.0), 5 mM CaCl 2 , 3 mM MgAc, 0.1 mM EDTA, 1 mM dithiothreitol (DTT)). Tissue was homogenized by five strokes with the ‘loose’ pestle followed by ten strokes with the ‘tight’ pestle (Kimble). 
Then, 1 ml of sucrose buffer (0.32 M sucrose, 10 mM HEPES (pH 8.0), 5 mM CaCl 2 , 3 mM MgAc, 0.1 mM EDTA, 1 mM DTT) was added to the Dounce homogenizer and the combined solution was passed through a 40 μm cell strainer into a fresh tube containing 1 ml of 0.32 M sucrose buffer. An additional 1 ml of 0.32 M sucrose buffer was passed through the filter and the resulting 3.5 ml solution was centrifuged at 3,200 g for 10 min at 4 °C. The pellet was resuspended in 3 ml of 0.32 M sucrose buffer and homogenized for 30 s (Ultra-Turrax disperser, setting 1). Next, 12.5 ml of 1 M sucrose buffer (1 M sucrose, 10 mM HEPES (pH 8.0), 3 mM MgAc, 1 mM DTT) was pipetted beneath the homogenate and the tube was centrifuged at 3,200 g for 20 min at 4 °C. After decanting the supernatant, the pellet was resuspended in 1 ml of resuspension solution (0.4 mg ml −1 BSA, 0.2 U μl −1 RNase inhibitor (Lucigen) in 1× PBS), filtered through a 35 µm cell strainer and diluted to a final concentration of 225 cells per µl. Single-nucleus suspensions were loaded onto two 10x Genomics chips (Chromium v3). snRNA-seq libraries were constructed according to the protocol provided by the manufacturer. Multiplexed snRNA-seq libraries were spiked with a PhiX control library (5%) and sequenced across two NextSeq 550 high-output flow-cell runs. Raw sequencing files were aligned to the mm10 mouse genome (GRCm38) and converted to gene expression matrices using the Cell Ranger pipeline (Cell Ranger v.5.0.1, default parameters). Intronic reads were included to increase assay sensitivity. Normalization, clustering and differential gene expression scRNA-seq data were analysed using Seurat (v.4.0) . For 10x Genomics datasets, nuclei expressing fewer than 200 genes and genes expressed in fewer than 5 nuclei were removed. For SSv4 data, cells expressing fewer than 1,000 genes and genes expressed in fewer than 5 cells were removed. To focus our analysis on neurons, we performed broad preliminary clustering to define principal cell types and remove cells and nuclei that lacked expression of neuronal genes ( Snap25 and Rbfox3 ) or expressed conventional glial cell markers ( Mbp , Pdgfra , Gfap , Csf1r and Pecam1 ). The final datasets comprised 4,720 neuronal nuclei from 10x experiments (8,669 median transcripts per cell; 3,816 median genes per cell) and 212 neuronal cells from SSv4 experiments (481,098 median transcripts per cell; 9,956 median genes per cell). Each scRNA-seq dataset was normalized and transformed to a common scale separately using SCTransform with the following parameters: n cells = half the total number of cells; variable.features = median number of genes expressed per cell. The resulting datasets were integrated by SCT-Pearson residuals using Seurat’s FindIntegrationAnchors and IntegrateData functions with the default parameters. We determined which principal components to use in subsequent clustering analyses by manually evaluating which principal components contributed to substantial variation (ElbowPlot function in Seurat). To increase cluster robustness, the optimal nearest neighbour parameter ( k ) was identified by iterating through nearest-neighbour values (FindNeighbors function in Seurat) and calculating the average silhouette score . The k -nearest-neighbour value yielding the highest average silhouette score was used for Louvain clustering.
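The silhouette-guided choice of the nearest-neighbour parameter can be illustrated with an analogous Python pipeline; the study used Seurat in R, so the scanpy-based sketch below (with standard log-normalization in place of SCTransform) is an analogy rather than the authors' workflow, and the candidate k range and the number of principal components are assumptions.

```python
# Analogous sketch (scanpy/scikit-learn, not Seurat) of the clustering-parameter
# search described above: iterate over nearest-neighbour values, run Louvain
# clustering and keep the k with the highest mean silhouette score. The filtering
# thresholds follow the text; n_pcs, the k range and simple log-normalization
# (in place of SCTransform) are illustrative assumptions. Requires the 'louvain'
# package for sc.tl.louvain.
import scanpy as sc
from sklearn.metrics import silhouette_score

def pick_k_and_cluster(adata, k_values=range(5, 51, 5), n_pcs=20):
    sc.pp.filter_cells(adata, min_genes=200)   # drop nuclei expressing <200 genes
    sc.pp.filter_genes(adata, min_cells=5)     # drop genes seen in <5 nuclei
    sc.pp.normalize_total(adata)
    sc.pp.log1p(adata)
    sc.pp.pca(adata, n_comps=n_pcs)
    best_k, best_score = None, -1.0
    for k in k_values:
        sc.pp.neighbors(adata, n_neighbors=k, n_pcs=n_pcs)
        sc.tl.louvain(adata, key_added=f"louvain_k{k}")
        labels = adata.obs[f"louvain_k{k}"].astype(str).values
        if len(set(labels)) < 2:
            continue
        score = silhouette_score(adata.obsm["X_pca"][:, :n_pcs], labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```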
Pairs of clusters that could not be reliably distinguished by a single gene using a binomial test ( q < 0.01; log-effect size > 2.0) were dissolved and cells reassigned to the nearest cluster based on Euclidean distance in principal component (PC) space . An initial round of clustering using this method was performed to detect principal cell types (for example, neurons, microglia, astrocytes). A subsequent round of clustering was performed on neuronal principal cell types based on enrichment of neuron-specific genes ( Snap25 , Rbfox3 ) and neurotransmitter vesicular transporters ( Slc17a6 , Slc17a7 , Slc32a1 ). Cell-type-specific marker genes were identified using a binomial test to determine which genes are expressed in cells within a given cluster compared to all other cells . The expression frequency of a given gene ( g ) expressed in a specific cell population ( N ) was compared to the expression frequency in the remaining population ( M ). Thus, the P value for this test was calculated as follows: $p_{g}=\sum_{k=N_{g}}^{N}C(N,k)\,\gamma^{k}(1-\gamma)^{N-k}$, where γ is the proportional frequency of cells expressing the gene of interest ($M_{g}/M$). A complete list of cluster-specific marker genes is provided in Supplementary Table . Statistics and reproducibility Statistical analysis was performed using R v.4.0.3 (The R Project for Statistical Computing). All values are reported as mean ± s.e.m. Statistical significance was tested using two-sided Wilcoxon rank-sum tests, two-sided Wilcoxon matched-pairs signed-rank tests or one- or two-way ANOVA with Tukey post hoc test. P < 0.05 was considered to be significant. P values between 0.05 and 0.1 were noted in the figures. In experiments involving electrical fibre stimulation, stimulation artifacts were blanked for display purposes. In Figs. and , two mice were examined in each group, and similar results were generated. In Extended Data Figs. and , three independent repeats were performed with similar results and representative images were shown. In Extended Data Figs. and , two independent repeats were performed with similar results. Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
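A worked version of the marker-gene binomial test defined above can be written with the binomial survival function, since the sum over k from N_g to N equals P(X ≥ N_g) for X ~ Binomial(N, γ); the example counts are invented for illustration.

```python
# Worked example of the marker-gene binomial test defined above: the probability
# of observing at least N_g expressing cells among the N cells of a cluster, given
# the out-of-cluster expression frequency gamma = M_g / M. Example counts are
# invented for illustration.
from scipy.stats import binom

def marker_gene_pvalue(n_expressing_in_cluster: int, n_cluster: int,
                       m_expressing_outside: int, m_outside: int) -> float:
    gamma = m_expressing_outside / m_outside
    # P(X >= N_g) for X ~ Binomial(N, gamma); sf(k) = P(X > k) = P(X >= k + 1)
    return binom.sf(n_expressing_in_cluster - 1, n_cluster, gamma)

# For example, 180 of 200 cluster cells expressing a gene versus 300 of 4,520
# cells outside the cluster gives a vanishingly small p value.
print(marker_gene_pvalue(180, 200, 300, 4520))
```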
After electroporation of the targeting construct into 129Sv/SvJ-derived ES cells, neomycin-resistant ES cell colonies were screened for IRES-cre insertion using long-range PCR and Southern blotting. Flp transfection was then performed to remove the FRT-flanked neomycin-resistance gene. Confirmed neomycin-excised colonies were then injected into C57BL/6 blastocysts to generate chimeric males, which were bred to C57BL/6 females to generate founders. Sample sizes for mouse behaviour and electrophysiology experiments were determined using the power analysis (‘pwr’ R package). Specifically, the function ‘pwr.t.test’ was used, with a significance level of 0.05, a power of 0.80, and effect sizes estimated from pilot experiments and/or previous studies using similar methods. For the histology experiment, samples were allocated by randomly selecting brain slices containing regions of interest. For all other experiments, samples and animals were randomized into groups. Experimenters were blinded to experimental groups before and during all mouse behavior experiments. Calcium imaging data were analyzed by two independent researchers using two different analysis pipelines at two universities. 4-OHT (H6278, Sigma-Aldrich) was prepared in absolute ethanol and Kolliphor EL (C5135, Sigma-Aldrich) and administered intraperitoneally (50 mg per kg). Naloxone (N7758, Sigma-Aldrich) was prepared in saline and administered intraperitoneally (5 mg per kg). To express GCaMP7f in rACC→Pn projection neurons for Ca 2+ imaging, we intracranially injected 400 nl of rAAV2-retro-hSyn-Cre-WPRE-hGH (105553-AAVrg; Addgene; titre: 7 × 10 12 viral genomes (vg) per ml) into the right Pn at the coordinates anteroposterior (AP): −4.0 mm, mediolateral (ML): +0.4 mm, dorsoventral (DV): −5.4/−5.8 mm and 400 nl of AAV1-syn-FLEX-jGCaMP7f-WPRE (104492-AAV1; Addgene; titre: 1 × 10 13 vg per ml) into the right ACC (AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm). To express GCaMP6s in ACC IT neurons, we injected 400 nl of AAVretro-EF1a-Flpo (55637-AAVrg; Addgene; titre: 2.2 × 10 13 vg per ml) into the left dorsomedial striatum at the coordinates AP: +0.2 mm, ML: −2.0 mm, DV: −4.1 mm and 400 nl of AAV8-EF1a-fDIO-GCaMP6s (105714-AAV8; Addgene; titre: 1.8 × 10 13 vg per ml) into the right ACC. To express GCaMP8m in cerebellar Purkinje cells for Ca 2+ imaging, we intracranially injected 200 nl of AAV1-CamKIIa-JGCaMP8m (176751-AAV1; Addgene; titre: 1.8 × 10 13 vg per ml) into lobule VI of the vermis at the coordinates AP: −7.0 mm, ML: +0.0 mm, DV: −360 and −200 µm. To trace the output of rACC neurons active during PAC, we intracranially injected 400 nl of AAV-DJ-hSyn-FLEX-mGFP-2A-Synaptophysin-mRuby (Stanford Virus Core; titre: 1.2 × 10 13 vg per ml) into the rACC of TRAP2 ( Fos CreERT2 ) mice at the coordinates AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm. To optogenetically manipulate the activity of the rACC→Pn pathway, we intracranially injected 400 nl of either AAV5-hSyn-eNpHR-eYFP (UNC Vector Core; titre: 4.4 × 10 12 vg per ml) or AAV5-hSyn-ChR2-eYFP (UNC Vector Core; titre: 5.3 × 10 12 vg per ml) into the ACC of WT mice, and 400 nl of AAV5-EF1a-DIO-eNpHR-eYFP (UNC Vector Core; titre: 4.5 × 10 12 vg per ml) into the Pn of Oprd1 cre mice. 
To optogenetically manipulate the activity of the Oprd1 + Pn neurons receiving monosynaptic inputs from rACC neurons, we intracranially injected in Oprd1 cre mice 400 nl of AAV1-EF1a-Flpo (55637-AAV; Addgene; titre: 2.3 × 10 13 ) into the rACC bilaterally, and then 400 nl AAV8-nEF-Con/Fon-NpHR-EYFP (137152-AAV8; Addgene; titre: 2.6 × 10 13 ) into the Pn bilaterally. To trace the output of ACC and secondary motor cortex projections to the Pn, we intracranially injected 400 nl of AAV5-hSyn-eGFP (UNC Vector Core; titre: 4 × 10 12 vg per ml) into the ACC (AP: +0.75 mm, ML: +0.5 mm, DV: −1.75 mm) and 400 nl of rAAV5-hsyn-chrimsonr-tdT (UNC Vector Core; titre: 4.6 × 10 12 vg per ml) into the secondary motor cortex (AP: +1.41 mm, ML: +0.75 mm, DV: −1.5 mm). To label rACC→Pn neurons for electrophysiological recording, we intracranially injected 400 nl of AAVretro-CAG-tdTomato (59462-AAVrg; titre: 1.2 × 10 12 vg per ml) into the right Pn at the coordinates listed above. To measure the feedforward inhibition of rACC→Pn neurons by PV + interneurons, in addition to the AAVretro-CAG-tdTomato virus injected into the Pn, we injected 400 nl of AAV5-DIO-EF1a-ChR2-eYFP (UNC Vector Core; titre: 4.0 × 10 12 vg per ml) into the rACC to express ChR2 in PV + interneurons. To trace the output of Pn neurons that receive inputs from the ACC, we injected 300 nl of the anterograde transsynaptic virus AAV1-hSyn-cre-WPRE-hGH (Addgene; titre: 1 × 10 13 vg per ml) into the ACC , then injected 400 nl of AAV-DJ-hSyn-FLEX-mGFP-2A-Synaptophysin-mRuby (Stanford Virus Core; titre: 1.2 × 10 13 vg per ml) into the Pn. The coordinates for the ACC and Pn injections were the same as for the Ca 2+ imaging experiment. To trace brain areas forming monosynaptic connections with Oprd1 + Pn neurons, we injected the AAV helper viruses AAV-FLEx-TVA-Mkate and AAV-FLEx-G into the Pn of Oprd1 cre mice. Then, 3 weeks later, a recombinant rabies virus (RVdG) was injected into the Pn. Next, 1 week after injection of the rabies virus, the brains were collected. The labelling efficiency of viruses is governed by a multifaceted interplay of factors, including the virus’s serotype and titre, the selected promoter, and the targeted cell types. Thus, the labelling efficiency of the virus used in this study cannot be consistently quantified. All surgeries were performed under aseptic conditions. Animals were anaesthetized using isoflurane (07-893-8441, Patterson Veterinary). After anaesthesia induction with 4% isoflurane in a chamber, animals were transferred to a small-animal digital stereotaxic instrument (David Kopf Instruments) and anaesthesia was maintained using 2% isoflurane. Injections were performed using a calibrated microcapillary tube (P0549, Sigma-Aldrich) and pulled with a P-97 micropipette puller (Sutter Instruments). The viral reagents were aspirated into the tube using negative pressure and delivered at a rate of around 50 nl min −1 using positive pressure. After injection, the tube was raised ~100 µm and held stationary for an additional 10 min to allow diffusion of the virus, and then slowly withdrawn at a rate of 0.05 mm s −1 . After surgery, the mice were transferred to a warm chamber until they had fully recovered, and were then returned to their home cage. Microendoscope or optical cannula implantation Microendoscope implantation in the ACC was performed as described previously , . In brief, we stereotaxically implanted a stainless steel cannula 1 week after viral injection. 
The cannula was fabricated with 18-G 304 S/S Hypodermic Tubing, custom cut to pieces 4.3 mm in length (Ziggy’s Tubes and Wires) and attached at one end to a Schott Glass 2 mm in diameter and 0.1 mm thick (TTL) using an optical adhesive (Norland Optical Adhesive #81, Thermo Fisher Scientific). After grinding away the excess glass using a polisher, the cannulas were carefully stored until use in implantation surgeries. For cannula implantation surgeries, mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) while the body temperature was maintained using a heating pad. After cranial hair removal, skin sterilization and scalp incision, we performed small craniotomies in three locations (AP: +5.10 mm, −3.56 mm, −3.56 mm; ML: −0.77mm, +2.06 mm, −3.01 mm) and then screwed three stainless steel screws (MX-000120-01SF, Component Supply Company) down to the dura of the skull to stabilize the implantation. We then performed a fourth craniotomy using a drill (Model EXL-M40, Osada) and a 1.4 mm round drill burr (19007-14, FST) at the coordinates AP: +0.75 mm, ML: +1.25 mm. The bone fragments and other detritus were cleared away from the opening using sterilized forceps. To prevent any increase in intracranial pressure and improve image quality, we aspirated away the overlying tissue down to approximately DV: −1.0 mm at an angle of 18°. The custom cannula was then attached to a holder (David Kopf Instruments) and lowered to AP: +0.75 mm, ML: +1.25 mm, DV: −1.8 mm at 18°. Blood and additional debris around the craniotomy were quickly removed and adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) was applied to seal the gap between the cannula and the skull. A custom-designed laser-cut headbar (18–24 G stainless steel, LaserAlliance) was placed over the left posterior skull screw, then layers of dental cement (Lang Dental) were applied to affix both the cannula and headbar to the skull. After the cement dried (7–10 min), we transferred the animal to a heated pad for recovery. After full recovery, the mice were returned to their home cage. Placement of the cranial window After cranial hair removal, skin sterilization and scalp incision, we performed small craniotomies in two locations (AP: +5.10 mm, −3.56 mm; ML: +0.77 mm, −2.89 mm) and then screwed two stainless steel screws (MX-000120-01SF, Component Supply Company) down to the dura of the skull to stabilize the implantation. We then opened an approximately 4-mm-diameter craniotomy above lobule VI of cerebellar vermis (7.0 mm posterior to bregma, 0.0 mm lateral). We first injected a virus expressing GCaMP8m into the cerebellum (DV: −360 to 200 µm). After virus injection, we gently removed the dura with fine forceps (91197-00, FST). Next, we applied Kwik-Sil (World Precision Instruments) to the border of the craniotomy. We then covered the brain with a 3-mm-diameter coverslip that we attached beneath a 5-mm-diameter coverslip before the experiment using ultraviolet-light-activated epoxy (Norland Optical Adhesive #81, Thermo Fisher Scientific). We fixed the 5-mm-diameter coverslip to the cranium with adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) and dental cement (Lang Dental). After the cement dried (7–10 min), we transferred the animal to a heated pad for recovery. After full recovery, the mice were returned to their home cage. 
Verification of microendoscope implantation and GCaMP expression in awake, behaving mice

Three weeks after the cannula implantation, we verified the GCaMP7f/8m fluorescence and Ca²⁺ activity in awake mice on a custom-designed apparatus to avoid using any general anaesthetics. Mice were head-fixed by clamping (CC-1, Siskiyou) their headbar and were allowed to run on a freely rotating wheel (InnoWheel, Thermo Fisher Scientific). For imaging rACC→Pn neurons, a naked 1.0-mm-diameter gradient refractive index (GRIN) lens probe (1050-004598, Inscopix) was lowered into the implanted cannula using forceps. A miniature microscope (nVoke, Inscopix) was attached to a holder (1050-002199, Inscopix) connected to a goniometer (GN1, Thorlabs) for x–y and y–z plane tilting. The holder was connected to a three-axis micromanipulator for lowering the miniature microscope to the optimal focal plane. Image acquisition software (nVoke, Inscopix) was used to display the incoming image frames in units of relative fluorescence changes (ΔF/F), enabling observation of Ca²⁺ activity in awake, behaving mice. If we observed Ca²⁺ transients, we proceeded by mounting the miniature microscope baseplate.

Miniature microscope baseplate mounting

Mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) and placed onto the stereotaxic instrument. The GRIN lens probe was fixed in place with ultraviolet-light-curable epoxy (Loctite Light-Activated Adhesive, 4305). The miniature microscope with baseplate attached was stereotaxically lowered toward the top of the GRIN lens probe or coverslips until the brain tissue was in focus. We then adjusted the orientation of the miniature microscope until it was parallel to the surface of the GRIN lens probe. The baseplate was then fixed onto the skull with dental cement. To prevent external light from contaminating the imaging field of view during recording, the outer layer was coated with black nail polish (Black Onyx NL T02, OPI). After attaching the baseplate cover (1050-004639, Inscopix), the mice were transferred to a heated pad for recovery. The mice were then returned to their home cages and housed individually.

Ca²⁺ imaging video recording, cell extraction and estimation of firing rates

To perform Ca²⁺ imaging in mice during PAC, we used an implanted GRIN lens for rACC→Pn neurons or a cranial window for cerebellar Purkinje cells and a miniaturized microscope (nVista, Inscopix). Miniature microscopes were mounted onto the head of the mouse before each behavioural experiment using a custom mounting station. Images were acquired using the Inscopix Data Acquisition Software (IDAS; Inscopix) at a frame rate of 20 Hz. The light-emitting diode intensity was set at 1.5 mW for imaging rACC→Pn neurons or 1 mW for imaging cerebellar Purkinje cell dendrites. A gain of 2 was used for all mice. Before performing cell extraction, we first corrected for brain motion in the videos using the TurboReg motion-correction algorithm. Later, we extracted both the spatial filters and the robust time traces for the regions of interest (ROIs) using EXTRACT, a robust cell extraction routine.
We matched the resulting ROIs across days based on Tanimoto similarity, defined as

$$T(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}\cdot\mathbf{y}}{\lVert\mathbf{x}\rVert^{2}+\lVert\mathbf{y}\rVert^{2}-\mathbf{x}\cdot\mathbf{y}},$$

where x and y are vectors corresponding to flattened spatial filters. We adjusted the cut-off for the Tanimoto similarity to each mouse for best results evaluated by visual inspection, then reinitialized EXTRACT with the global cell map. This process allows EXTRACT to find cells that may be missed on different days. We performed this routine of cell extraction, across-day registration and reinitialization for five iterations. We next visually inspected all videos and discarded spurious, duplicate and dendritic ROIs. Lastly, using the verified ROIs that correspond to cells, we performed a final robust regression via EXTRACT to obtain the final traces. We took the z-score of the traces, subtracting the mean and dividing by the s.d., to standardize the trace units across days. To obtain a discrete approximation of the firing rates, we first thresholded the raw traces by a constant value, chosen as 0.5× the maximum value of the trace. The thresholded traces were searched for peak points that were at least 3 frames apart from each other, and then binarized at these peaks (Extended Data Fig. ). The firing rates were obtained by convolving the binarized peaks through a Gaussian kernel with an s.d. of 500 ms. As we thresholded the traces at the beginning and binarized the peaks at the end before convolution, the resulting firing rates did not necessarily contain all of the event times, but should be considered a discrete approximation using the binarized versions of large Ca²⁺ events. We chose this approximation of firing rates for its robustness against day-to-day variations in the cell baselines and/or absolute ΔF/F values.

Analysis of neural responses during crossing times across days

For each condition, the mean activity of each neuron during border crossing times was calculated by averaging the z-scored neural activities (firing rates for Extended Data Fig. ) within a time window beginning 2 s before and ending 2 s after the crossing event. We chose the averaged activities inside the time window instead of performing a point estimation of neural activity. This choice was motivated by our desire to mitigate the measurement errors in crossing times from the behavioural videos and to remain agnostic to the fine structure of the neural code in short time durations, as our interest is in understanding whether the neural population reacts differently, corresponding to the level of pain relief expectation during crossing and crossing back, during each phase of PAC.
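To make the ROI-matching and firing-rate steps concrete, the following Python sketch reimplements the two operations described above with NumPy/SciPy. It is an illustrative reconstruction, not the authors' EXTRACT/Inscopix pipeline; the 20 Hz frame rate, the 0.5× maximum threshold, the 3-frame peak separation and the 500 ms Gaussian kernel are taken from the text, whereas the function names and array layouts are our own assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import gaussian_filter1d

FRAME_RATE_HZ = 20.0  # miniscope acquisition rate stated in the text


def tanimoto_similarity(x, y):
    """Tanimoto similarity between two flattened spatial filters."""
    x = np.ravel(x).astype(float)
    y = np.ravel(y).astype(float)
    dot = np.dot(x, y)
    return dot / (np.dot(x, x) + np.dot(y, y) - dot)


def approximate_firing_rate(trace, frame_rate=FRAME_RATE_HZ,
                            thresh_frac=0.5, min_sep_frames=3,
                            kernel_sd_s=0.5):
    """Discrete firing-rate approximation from a single dF/F trace.

    Keeps peaks above thresh_frac * max that are at least min_sep_frames
    apart, binarizes them, then smooths with a Gaussian kernel whose s.d.
    is kernel_sd_s seconds.
    """
    trace = np.asarray(trace, dtype=float)
    peaks, _ = find_peaks(trace, height=thresh_frac * trace.max(),
                          distance=min_sep_frames)
    binarized = np.zeros_like(trace)
    binarized[peaks] = 1.0
    return gaussian_filter1d(binarized, sigma=kernel_sd_s * frame_rate)
```

A cross-day match between two ROIs would then be accepted when tanimoto_similarity exceeds the per-mouse cut-off chosen by visual inspection, as described above.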
To determine the extent to which the neural responses of individual cells may differ between first border crossing and crossing back, we computed a discriminability index (d′)² for each neuron using the equation

$$(d')^{2}=\frac{\mu_{\mathrm{forward}}-\mu_{\mathrm{backward}}}{\sigma_{\mathrm{pooled}}},$$

where μ_forward and μ_backward are the mean activity of the neuron calculated during first crossing and crossing back, respectively. σ_pooled is the pooled s.d. from both conditions. To perform randomized controls for all analyses, we averaged the values of interest (discriminability index, z-scored traces and/or firing rates) with randomly selected crossing times for each neuron 100 times and created a null distribution over the neural population.

Classification of Purkinje cells

We conducted k-means clustering analysis to categorize Purkinje cells based on their activity during the first border crossing, first crossing back and the last crossing on the post-test day. We extracted the Ca²⁺ activity of each Purkinje cell during these events within the 4 s border crossing period for each condition and then concatenated them, resulting in a Ca²⁺ activity trace for each Purkinje cell with a total duration of 12 s. Subsequently, we performed silhouette analysis to determine the optimal number of clusters for this dataset, which we found to be two. Accordingly, we then applied k-means clustering to classify all cross-day-aligned Purkinje cells into two clusters.

Purkinje cell Ca²⁺ spike frequency and amplitude analysis

To find Ca²⁺ events in Purkinje cells, we first applied a threshold to the raw traces using a constant value set at 0.1 times the maximum amplitude of the trace. The thresholded traces were then scanned for peaks that were at least 3 frames apart from each other and binarized at these peaks. For frequency analysis, we counted the number of Ca²⁺ events in the binarized traces of each cell during border crossing and calculated their firing frequency during this 4 s border-crossing period (Extended Data Fig. ). For Ca²⁺ spike amplitude analysis, we extracted the value from the z-scored Ca²⁺ traces of each cell at the timepoint of detected Ca²⁺ events as the amplitude. We then selected these amplitudes during the 4 s border-crossing time for comparison (Extended Data Fig. ). For Ca²⁺ spike waveform analysis, we used the z-scored trace of each Purkinje cell to identify the start, peak and end points of each spike that had an amplitude larger than 3 z-scored ΔF/F (Extended Data Fig. ).
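The discriminability index defined above, together with its shuffle control, can be summarized in a few lines of Python. This is a minimal sketch, assuming z-scored activity sampled at 20 Hz and a standard equal-weight pooling of the two variances; the text does not spell out the exact pooling, so treat that detail, and the helper names, as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FRAME_RATE_HZ = 20
WIN = 2 * FRAME_RATE_HZ  # 2 s before and 2 s after a crossing


def window_means(trace, event_frames, win=WIN):
    """Mean activity in a +/- 2 s window around each event frame."""
    return np.array([trace[max(t - win, 0):t + win].mean()
                     for t in event_frames])


def d_prime_index(trace, forward_frames, backward_frames):
    """(mean forward - mean backward) / pooled s.d. of the two conditions."""
    fwd = window_means(trace, forward_frames)
    bwd = window_means(trace, backward_frames)
    pooled_sd = np.sqrt((fwd.var(ddof=1) + bwd.var(ddof=1)) / 2.0)
    return (fwd.mean() - bwd.mean()) / pooled_sd


def shuffled_null(trace, n_fwd, n_bwd, n_frames, n_shuffles=100):
    """Null distribution built from randomly selected crossing times."""
    null = []
    for _ in range(n_shuffles):
        f = rng.integers(WIN, n_frames - WIN, size=n_fwd)
        b = rng.integers(WIN, n_frames - WIN, size=n_bwd)
        null.append(d_prime_index(trace, f, b))
    return np.array(null)
```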
Tissue collection and processing

Mice were transcardially perfused with phosphate-buffered saline (PBS) followed by 4% formaldehyde in PBS. Brains were then dissected, post-fixed in 4% formaldehyde, and cryoprotected in 30% sucrose. Tissues were then frozen in Optimum Cutting Temperature compound (OCT; 4583, Tissue Tek) and sectioned using a cryostat (Leica). The brains were sectioned at 40 μm and stored in PBS at 4 °C if used immediately. For longer storage, tissue sections were placed in glycerol-based cryoprotectant solution and stored at −20 °C. For in situ hybridization, tissues were sectioned at 14 μm, collected on Superfrost Plus slides (22-037-246, Thermo Fisher Scientific) and stored at −80 °C.

Immunohistochemistry

Tissues were blocked for 1 h in 0.1 M PBS with 0.3% Triton X-100 (Sigma-Aldrich) plus 5% normal donkey serum. Primary and secondary antibodies were diluted in 0.1 M PBS with 0.3% Triton X-100 plus 1% normal donkey serum. The sections were then incubated overnight at 4 °C in primary antibody solution, washed in 0.1 M PBS with 0.3% Triton X-100 for 40 min, incubated for 2 h in secondary antibody at room temperature and washed in 0.1 M PBS for 40 min. The sections were then mounted using Fluoromount-G (00-4958-02, Thermo Fisher Scientific). Images were acquired on the Zeiss LSM 780 confocal microscope using Zeiss Zen software, running on a Windows PC, in the UNC Neuroscience Microscopy Core. A streptavidin conjugate (Alexa Fluor 594 conjugate, Thermo Fisher Scientific; 1:1,000) was used to visualize biocytin.

In situ hybridization

For in situ hybridization experiments, we used Advanced Cell Diagnostics RNAscope Technology (ACD Bioscience). In brief, wild-type mice 5–8 weeks old were deeply anaesthetized with 0.1 ml of Euthasol (NDC-051311-050-01, Virbac) and perfused transcardially with 0.1 M PBS followed by 4% formaldehyde solution in PBS. Brains were dissected, cryoprotected in 30% sucrose overnight and then frozen in OCT. Frozen tissue was cut at 20 μm onto Superfrost Plus slides and stored at −80 °C. Tissue was thawed from −80 °C, washed with PBS at room temperature and subsequently processed according to the protocol provided by the manufacturer.
We first pretreated the tissue with solutions from the pretreatment kit to permeabilize the tissue, then incubated with protease for 30 min and then with the hybridization probe(s) for another 2 h at 40 °C.

To quantify the output of neurons in the rACC TRAPed after PAC, we analysed the expression of mRuby in putative presynaptic axonal terminals. First, background subtraction was performed on the mRuby channels of each image using the rolling-ball algorithm in ImageJ. Subsequently, the images were thresholded to a value 4 times the mean background intensity, converting them into binary format. The pixel densities of these binary images were then calculated and normalized to the size of the specific regions displaying mRuby expression.

ACC slice preparation

We began the PAC paradigm 3–4 weeks after virus injection. Mice were euthanized immediately after the conditioning phase of PAC. After decapitation, the brain was rapidly collected and immersed in ice-cold slicing solution containing 87 mM NaCl, 25 mM NaHCO3, 2.5 mM KCl, 1.25 mM NaH2PO4, 10 mM d-glucose, 75 mM sucrose, 0.5 mM CaCl2 and 7 mM MgCl2 (pH 7.4 in 95% O2 and 5% CO2, 325 mOsm).
Coronal brain slices 300 μm thick and containing the rACC were cut using a VT1200 vibratome (Leica Microsystems). After around 20 min incubation at 35 °C, the slices were stored at room temperature. Slices were then transferred to the chamber for electrophysiological recording. Slices were used for a maximum of 5 h after dissection. The experiments were performed at 21–24 °C. During the experiment, slices were superfused with a physiological extracellular solution containing 125 mM NaCl, 2.5 mM KCl, 25 mM NaHCO3, 1.25 mM NaH2PO4, 25 mM d-glucose, 2 mM CaCl2, and 1 mM MgCl2 (pH 7.4 in 95% O2 and 5% CO2, ~325 mOsm). Whole-cell patch recording of rACC→Pn neurons was performed as described previously. The pipettes (1B150F-4, WPI) were formed using a P-97 puller (Sutter Instruments). The resistance was 3–5 MΩ.

Measuring action potential properties, spontaneous release and LTP induction

The intracellular solution used for testing the action potential firing properties, spontaneous release and LTP induction of rACC→Pn neurons contained 135 mM K-gluconate, 20 mM KCl, 0.1 mM EGTA, 2 mM MgCl2, 2 mM Na2ATP, 10 mM HEPES and 0.3 mM Na3GTP (pH adjusted to 7.28 with KOH, ~310 mOsm); in a subset of recordings, 0.2% biocytin was added. To measure membrane properties and evoke action potential firing of rACC→Pn neurons, a 1 s step current (−50, 0, 50, 100, 150, 200, 250, 300 pA) was injected into the cell through the recording pipette. Spontaneous EPSCs were recorded while holding the rACC→Pn neurons at −70 mV. For LTP induction, biphasic electrical stimulations (5–8 V, 100 ms) were delivered by placing a borosilicate theta glass (2.0 mm, Warner Instruments) in layer II/III of the rACC. The glass was pulled using a vertical pipette puller and filled with perfusion solution. The fibre was stimulated using the DS4 Bi-Phasic Current Stimulator (Digitimer) at 0.02 Hz to measure evoked EPSCs of the rACC→Pn neurons for 6 min as the baseline. TBS (5 trains of burst with 4 pulses at 100 Hz, at 200 ms intervals, repeated 4 times at intervals of 10 s) was then administered to induce LTP. After LTP induction, evoked EPSCs were recorded for another 30 min to compare against the baseline. No blocker was used to block inhibitory synaptic inputs.

Measuring AMPA/NMDA ratio, PPR and feedforward inhibition

The Cs⁺-based intracellular solution used to measure the AMPA/NMDA ratio and PPR contained 130 mM Cs-methanesulfonate, 2 mM KCl, 10 mM EGTA, 2 mM MgCl2, 2 mM Na2ATP, 10 mM HEPES and 5 mM QX-314 (pH adjusted to 7.28 with CsOH, ~310 mOsm). To evoke synaptic response of rACC→Pn neurons, electrical stimulation (50–80 μA, 100 μs) was delivered by placing a concentric bipolar electrode (FHC) in layer II/III of the rACC. The selective GABA-A receptor antagonist SR-95531 (10 μM; Sigma-Aldrich) was used to block IPSCs. To record EPSCs mediated by both AMPA and NMDA receptors, membrane potentials were held at voltages increasing from −80 mV to +60 mV. To measure the PPR, two electrical stimulations at different time intervals (20, 50, 100, 200, 500 ms) were used to evoke synaptic transmission. The membrane potential was set to −30 mV to record both EPSCs and IPSCs in the same trace (Extended Data Fig. ) or to either −70 mV or +10 mV to examine EPSCs or IPSCs in isolation (Fig. ).

Measuring inhibitory input from PV⁺ interneurons

Slices from Pvalb-cre mice were prepared as described above.
An intracellular solution containing high chloride concentration (140 mM KCl, 10 mM EGTA, 2 mM MgCl2, 2 mM ATP, 10 mM HEPES and 2 mM QX-314; pH adjusted to 7.28 with KOH; 313 mOsm) was used for postsynaptic recordings of rACC→Pn neurons, which were conducted in the voltage-clamp configuration with a holding potential of −70 mV. For all voltage-clamp recordings, we applied hyperpolarizing test pulses (5 mV, 100 ms) to monitor series and input resistance throughout the entire experiment. Data from experiments in which series resistance changed more than 15% were discarded.

Data acquisition and analysis

Electrophysiological data were acquired using the Multiclamp 700b amplifier (Axon Instruments), low-pass filtered at 10 kHz, and sampled at 20 or 50 kHz using the Digidata 1440A low-noise digitizer (Axon Instruments). Stimulation and data acquisition were performed using Clampfit 10 software (Axon Instruments). Data were analysed using Stimfit v.0.14.9 ( https://github.com/neurodroid/stimfit ), Clampfit v.11.2 (Molecular Devices) and R v.4.0.3 (The R Project for Statistical Computing). sEPSCs were detected using a template-matching algorithm and verified by visual inspection. The location at which the peak EPSC was recorded while holding the membrane potential at −80 mV was used to measure the amplitude of AMPAR EPSCs. The amplitude of NMDAR EPSCs was measured 50 ms after the electrical stimulation. The synaptic latency of monosynaptic EPSCs or IPSCs was measured from the onset of the electrical stimulus to the onset of the EPSC or IPSC. The disynaptic IPSC delay (Fig. ) was measured from the onset of the EPSC at −70 mV to the onset of the IPSC at +10 mV.
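As a concrete illustration of the evoked-EPSC measurements described in this paragraph (AMPAR amplitude at the peak of the −80 mV trace, NMDAR amplitude 50 ms after the stimulus, and onset latency), here is a minimal NumPy sketch. The 20 kHz sampling rate is one of the rates stated above, but the baseline window and the 3-s.d. onset criterion are illustrative assumptions, not the authors' exact Stimfit/Clampfit settings.

```python
import numpy as np

FS = 20_000  # sampling rate in Hz (20 kHz)


def baseline_subtract(trace, stim_idx, base_ms=5.0):
    """Subtract the mean of a short pre-stimulus baseline; return its s.d. too."""
    base = trace[stim_idx - int(base_ms * FS / 1000):stim_idx]
    return trace - base.mean(), base.std()


def ampa_nmda_ratio(trace_minus80, trace_plus60, stim_idx,
                    peak_window_ms=50.0, nmda_delay_ms=50.0):
    """AMPA = peak after the stimulus at -80 mV; NMDA = value 50 ms post-stimulus at +60 mV."""
    t80, _ = baseline_subtract(trace_minus80, stim_idx)
    t60, _ = baseline_subtract(trace_plus60, stim_idx)
    win = slice(stim_idx, stim_idx + int(peak_window_ms * FS / 1000))
    ampa = np.abs(t80[win]).max()                       # inward peak at -80 mV
    nmda = np.abs(t60[stim_idx + int(nmda_delay_ms * FS / 1000)])
    return ampa / nmda


def onset_latency_ms(trace, stim_idx, n_sd=3.0):
    """Latency from stimulus onset to the first point exceeding n_sd baseline s.d."""
    sub, sd = baseline_subtract(trace, stim_idx)
    onset = np.argmax(np.abs(sub[stim_idx:]) > n_sd * sd)
    return onset / FS * 1000.0
```

A paired-pulse ratio would follow the same pattern: the peak amplitude after the second stimulus divided by the peak after the first.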
For fibreoptic cannula implantation surgeries, mice were anaesthetized with isoflurane (4% for induction and 2% for maintenance) while the body temperature was maintained using a heating pad. After cranial hair removal, skin sterilization and scalp incision, we bilaterally injected a virus encoding inhibitory or excitatory opsin into the ACC using the coordinates described above for manipulating ACC terminals in the Pn. To manipulate the activity of Oprd1⁺ cells in the Pn, we bilaterally injected a virus encoding an inhibitory opsin into the Pn at the coordinates AP: −4.0 mm, ML: ±0.4 mm, DV: −5.4/−5.8 mm. To manipulate the activity of Oprd1⁺ cells in the Pn that receive rACC inputs, we bilaterally injected the AAV1-Flpo virus into the rACC, then another virus into the Pn to express an inhibitory opsin in a Cre- and Flp-dependent manner. After virus injection, we performed small craniotomies in three locations (AP: +5.10, −1.06, −3.56 mm; ML: −0.77, +2.87, −3.13 mm). Next, to stabilize the implantation, three stainless steel screws (MX-000120-01SF, Component Supply Company) were drilled into the dura of the skull. We then performed two additional craniotomies at the coordinates AP: −4.0 mm, ML: ±1.2 mm. The cannula (CFMLC12L05, Thorlabs or RWD) was then attached to a holder (David Kopf Instruments) and lowered at 10° to the coordinates AP: −4.0 mm, ML: ±1.2 mm, DV: −4.9 mm. Blood and debris around the craniotomy were quickly removed and adhesive cement (S380 Metabond Quick Adhesive Cement System, C&B) was used to seal the gap between the cannula and skull. A custom-designed laser-cut headbar (18–24 G stainless steel, LaserAlliance) was placed over the left posterior skull screw, then layers of dental cement (Lang Dental) were applied to affix both the cannula and headbar to the skull. After the cement dried (7–10 min), we transferred the animal to a heated pad until full recovery, then to their home cage. For optogenetic photostimulation of inhibitory (eNpHR3.0) or excitatory (ChR2) opsins, ferrules were connected to a 561 nm (yellow) laser diode (MGL-FN-561, Opto Engine) or 473 nm (blue) laser diode (MBL-III-473, Opto Engine LLC) using an FC/PC adaptor and a fibreoptic rotary joint (Thorlabs). The laser output was controlled using a shutter controller (SR470, Stanford Research System), which delivered yellow light continuously for the inhibitory opsin and 4 ms blue light pulses at 20 Hz for the excitatory opsin. Light output through the optical fibres was adjusted to ~5 mW at the tip of the optical fibre for inhibition and ~10 mW at the tip of the optical fibre for excitation. For all behavioural assays described below, mice were acclimatized to the researcher and testing environment for at least 30 min before testing.

PAC assay to induce and evaluate placebo analgesia

The PAC apparatus consists of two adjacent and visually distinct chambers, using two separate thermal plates (BIOSEB) as the floor. PAC is a 7 day behavioural assay consisting of three phases: habituation (days 1–2) and pre-test (day 3), conditioning (days 4–6), and post-test (day 7; Fig. ). During the habituation and pre-test phases, the floors of both chambers are set at 30 °C and the mice are free to explore both compartments for 3 min; their performance on the pre-test day is compared with their performance on the post-test day. During the conditioning phase, the floor of the chamber on which the mouse begins the session (chamber 1) is set at 48 °C.
Mice progressively learn that chamber 1 is painful and to associate chamber 2, which remains at 30 °C, with pain relief. On the post-test day, the floors of both chambers are set at 48 °C to evaluate any analgesic effect induced by the expectation of pain relief. The performance of mice was recorded for 3 min using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks). The recorded videos were analysed using the machine-learning-based algorithm DeepLabCut or Ethovision XT15 (Noldus). We quantified and compared the latency of border crossings, time spent in each chamber and nocifensive behaviours (licking, rearing, jumping) of conditioned and unconditioned mice (Fig. ); a sketch of this quantification is given after the behavioural assays below.

Naloxone injection

To investigate whether endogenous opioid activity is necessary for PAC-induced placebo analgesia, we injected mice with saline or naloxone (N7758, Sigma-Aldrich) intraperitoneally (5 mg per kg) during the conditioning phase (days 4 to 6; Extended Data Fig. ) or before the post-conditioning test on day 7 (Extended Data Fig. ). After injection, the mice were returned to their home cage for at least 30 min to reduce injection-induced stress. Saline-injected mice were used as controls.

TRAP of rACC neurons during PAC

Two weeks after virus injection, TRAP2 mice were subjected to an adjusted PAC assay (30 min conditioning phase on days 4–6 instead of 3 min) to label the rACC neurons encoding expectation of pain relief. TRAP2 mice were injected with 4-hydroxytamoxifen (50 mg per kg, subcutaneous) on the last day of the conditioning phase (day 6) immediately before conducting the PAC trial. After injection, the mice were allowed to remain in the PAC apparatus for 30 min, and then returned to their home cages. Then, 2 weeks later, we perfused the mice and dissected the brains to determine synaptophysin–mRuby expression in the rACC and other brain areas. Mice that underwent the same procedure but with both chambers set at 30 °C were used as controls.

Pin prick

To examine the Ca²⁺ activity of rACC→Pn neurons during noxious mechanical stimulation (Extended Data Fig. ), we gently touched the plantar surface of the hindpaw with a 25 G needle 10 times at an interval of around 30 s (Extended Data Fig. ). As a control, a needle with a blunt end was used to measure the Ca²⁺ activity of rACC→Pn neurons during innocuous mechanical stimulation. The entire procedure was recorded using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks) and synchronized with the miniscope.

Hindpaw radiant heat (Hargreaves) test

To examine the Ca²⁺ activity of rACC→Pn neurons during noxious thermal stimulation (Extended Data Fig. ), we used the Hargreaves test. Mice were placed in plastic chambers on a glass surface heated to 25 °C, through which a radiant heat source (Department of Anesthesiology, UC San Diego) could be focused onto the hindpaw. We recorded the performance of mice using a camera (acA1300, Basler) controlled by MATLAB (R2019b, MathWorks) and synchronized with the miniscope.

Von Frey withdrawal threshold test

Eight von Frey filaments (Stoelting), ranging from 0.007 to 6.0 g, were used to assess mechanical withdrawal thresholds. Filaments were applied perpendicular to the ventral–medial hindpaw surface with sufficient force to cause a slight bending of the filament. A positive response was characterized by a rapid withdrawal of the paw away from the stimulus fibre within 4 s. The up–down method was used to determine the mechanical threshold (50% withdrawal threshold).
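For the PAC videos described above, chamber occupancy and the latency of the first border crossing can be computed directly from the animal's tracked position (for example, a DeepLabCut body-centre trace). The sketch below is a hedged illustration: the 30 frames-per-second rate, the single border coordinate and the layout of the tracking output are assumptions, not values reported in the text.

```python
import numpy as np

FPS = 30.0  # assumed video frame rate


def pac_metrics(x_position, border_x, start_in_chamber1=True):
    """Time in each chamber (s) and latency to the first border crossing (s).

    x_position : 1D array of tracked x-coordinates, one value per frame.
    border_x   : x-coordinate of the partition between chambers 1 and 2.
    """
    x = np.asarray(x_position, dtype=float)
    in_chamber1 = x < border_x if start_in_chamber1 else x >= border_x
    time_c1 = in_chamber1.sum() / FPS
    time_c2 = (~in_chamber1).sum() / FPS

    # frames at which the mouse leaves chamber 1 for chamber 2
    crossings = np.flatnonzero(in_chamber1[:-1] & ~in_chamber1[1:]) + 1
    latency = crossings[0] / FPS if crossings.size else np.nan
    return {"time_chamber1_s": time_c1,
            "time_chamber2_s": time_c2,
            "first_crossing_latency_s": latency}
```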
Von Frey withdrawal frequency test

To evaluate mechanical sensitivity, we used six von Frey filaments (0.07, 0.16, 0.4, 1.0, 1.4 and 6.0 g). Filaments were applied perpendicular to the ventral–medial hindpaw surface with sufficient force to cause a slight bending of the filament. Each filament was applied for 1 s. A positive response was characterized by a rapid and immediate withdrawal of the paw away from the filament. Each filament was applied five times. The frequency of reflexive withdrawal responses was calculated.

Hotplate test

Mice were acclimatized to the testing environment as described above. The plate temperature was set at 48 °C or 52 °C to measure thermal pain threshold. The mouse was placed onto the plate and the latency preceding licking and/or biting of a hindpaw was scored. To prevent tissue damage, a cut-off of 3 min or 1 min was set for the 48 °C and 52 °C plates, respectively.

Formalin test

An intraplantar injection (20 µl) of 2.5% formalin was performed in the left hindpaw of mice after the conditioning phase of PAC. The mouse behaviour was recorded for 30 min within the PAC apparatus or using a four-camera set-up enabling synchronized capture of each lateral angle. The time spent licking the injected hindpaw was scored using Ethovision XT15 (Noldus) or automatically scored using DeepEthogram, an unbiased, pixel-based machine learning algorithm.
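Once DeepEthogram (or manual scoring) yields a per-frame licking label, the formalin-test readout reduces to summing frames. The small sketch below illustrates this; binning into 5 min blocks is a common convention for formalin data and is our illustrative choice, not a detail given in the text, and the frame rate is likewise assumed.

```python
import numpy as np

FPS = 30.0  # assumed frame rate of the behaviour videos


def licking_time(labels, fps=FPS, bin_minutes=5):
    """Total and binned licking time (s) from a binary per-frame label array."""
    labels = np.asarray(labels, dtype=bool)
    total_s = labels.sum() / fps
    frames_per_bin = int(bin_minutes * 60 * fps)
    n_bins = int(np.ceil(labels.size / frames_per_bin))
    binned_s = [labels[i * frames_per_bin:(i + 1) * frames_per_bin].sum() / fps
                for i in range(n_bins)]
    return total_s, binned_s
```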
Sample preparation, library generation and sequencing

For low-throughput, high-depth scRNA-seq, we used the SMART-seq v4 Ultra Low Input RNA Kit for Sequencing (SSv4; TakaraBio) as described previously.
To focus our high-depth analysis on neurons, we used Snap25-IRES2-cre;Ai14 (Snap25-tdT) mice, which express the fluorescent reporter tdTomato in neurons. The Pn was microdissected from two 8-week-old Snap25-tdT mice (one male and one female). The mice were anaesthetized with isoflurane and perfused with artificial cerebrospinal fluid comprising CaCl2 (0.5 mM), glucose (25 mM), HCl (96 mM), HEPES (20 mM), MgSO4 (10 mM), NaH2PO4 (1.25 mM), myo-inositol (3 mM), N-acetylcysteine (12 mM), NMDG (96 mM), KCl (2.5 mM), NaHCO3 (25 mM), sodium l-ascorbate (5 mM), sodium pyruvate (3 mM), taurine (0.01 mM) and thiourea (2 mM), bubbled with carbogen (95% O2 and 5% CO2). The Pn was microdissected and embedded in 2% agarose, sliced into 250 µm sections with a vibratome, then subjected to enzymatic digestion with pronase (1 mg ml⁻¹) for 70 min at room temperature and triturated using fire-polished Pasteur pipettes to generate single-cell suspensions. Live single Pn neurons (DAPI⁻tdTomato⁺) were isolated by fluorescence-activated cell sorting into eight-well strips containing SSv4 lysis buffer, then stored at −80 °C. To prepare single-cell transcriptome libraries, polyadenylated RNAs were reverse transcribed into full-length cDNA and subjected to 18 PCR amplification cycles according to the SSv4 protocol. Single-cell libraries were indexed and prepared for Illumina sequencing using the Nextera XT DNA Library Preparation Kit. Multiplexed libraries were sequenced on HiSeq 2500 sequencers to generate 100 bp paired-end reads at a depth of 2.5 million reads per cell. Single-cell FastQ files were aligned to the mm10 mouse genome (GRCm38) using STAR (v.2.7.3a). For high-throughput, low-depth single-nucleus RNA-seq (snRNA-seq) we used the 10x Chromium 3′ V3 System (10x Genomics). We microdissected the Pn from two 8-week-old female C57BL/6J mice. Pn tissues were pooled and flash-frozen on dry ice. Single-nucleus isolation was performed as described previously. Tissue was placed into a prechilled Dounce homogenizer (Kimble) containing 500 µl chilled detergent lysis buffer (0.10% Triton X-100, 0.32 M sucrose, 10 mM HEPES (pH 8.0), 5 mM CaCl2, 3 mM MgAc, 0.1 mM EDTA, 1 mM dithiothreitol (DTT)). Tissue was homogenized by five strokes with the 'loose' pestle followed by ten strokes with the 'tight' pestle (Kimble). Then, 1 ml of sucrose buffer (0.32 M sucrose, 10 mM HEPES (pH 8.0), 5 mM CaCl2, 3 mM MgAc, 0.1 mM EDTA, 1 mM DTT) was added to the Dounce homogenizer and the combined solution was passed through a 40 μm cell strainer into a fresh tube containing 1 ml of 0.32 M sucrose buffer. An additional 1 ml of 0.32 M sucrose buffer was passed through the filter and the resulting 3.5 ml solution was centrifuged at 3,200 g for 10 min at 4 °C. The pellet was resuspended with 3 ml of 0.32 M sucrose buffer and homogenized for 30 s (Ultra-Turrax disperser, setting 1). Next, 12.5 ml of 1 M sucrose buffer (1 M sucrose, 10 mM HEPES (pH 8.0), 3 mM MgAc, 1 mM DTT) was pipetted beneath the homogenate and the tube was centrifuged at 3,200 g for 20 min at 4 °C. After decanting the supernatant, the pellet was resuspended in 1 ml of resuspension solution (0.4 mg ml⁻¹ BSA, 0.2 U μl⁻¹ RNase inhibitor (Lucigen) in 1× PBS), filtered through a 35 µm cell strainer and diluted to a final concentration of 225 cells per µl. Single-nucleus suspensions were loaded onto two 10x Genomics chips (Chromium v3). snRNA-seq libraries were constructed according to the protocol provided by the manufacturer.
Multiplexed snRNA-seq libraries were spiked with a PhiX control library (5%) and sequenced across two NextSeq 550 high-output flow-cell runs. Raw sequencing files were aligned to the mm10 mouse genome (GRCm38) and converted to gene expression matrices using the Cell Ranger pipeline (Cell Ranger v.5.0.1, default parameters). Intronic reads were included to increase assay sensitivity.

Normalization, clustering and differential gene expression

scRNA-seq data were analysed using Seurat (v.4.0). For 10x Genomics datasets, nuclei expressing fewer than 200 genes and genes expressed in fewer than 5 nuclei were removed. For SSv4 data, cells expressing fewer than 1,000 genes and genes expressed in fewer than 5 cells were removed. To focus our analysis on neurons, we performed broad preliminary clustering to define principal cell types and remove cells and nuclei that lacked expression of neuronal genes (Snap25 and Rbfox3) or expressed conventional glial cell markers (Mbp, Pdgfra, Gfap, Csf1r and Pecam1). The final datasets comprised 4,720 neuronal nuclei from 10x experiments (8,669 median transcripts per cell; 3,816 median genes per cell) and 212 neuronal cells from SSv4 experiments (481,098 median transcripts per cell; 9,956 median genes per cell). Each scRNA-seq dataset was normalized and transformed to a common scale separately using SCTransform with the following parameters: n cells = half the total number of cells; variable.features = median number of genes expressed per cell. The resulting datasets were integrated by SCT-Pearson residuals using Seurat's FindIntegrationAnchors and IntegrateData functions with the default parameters. We determined which principal components to use in subsequent clustering analyses by manually evaluating which principal components contributed to substantial variation (ElbowPlot function in Seurat). To increase cluster robustness, the optimal nearest-neighbour parameter (k) was identified by iterating through nearest-neighbour values (FindNeighbors function in Seurat) and calculating the average silhouette score. The k-nearest-neighbour value yielding the highest average silhouette score was used for Louvain clustering. Pairs of clusters that could not be reliably distinguished by a single gene using a binomial test (q < 0.01; log-effect size > 2.0) were dissolved and cells reassigned to the nearest cluster based on Euclidean distance in principal component (PC) space. An initial round of clustering using this method was performed to detect principal cell types (for example, neurons, microglia, astrocytes). A subsequent round of clustering was performed on neuronal principal cell types based on enrichment of neuron-specific genes (Snap25, Rbfox3) and neurotransmitter vesicular transporters (Slc17a6, Slc17a7, Slc32a1). Cell-type-specific marker genes were identified using a binomial test to determine which genes are expressed in cells within a given cluster compared to all other cells. The expression frequency of a given gene (g) expressed in a specific cell population (N) was compared to the expression frequency in the remaining population (M).
Thus, the P value for this test was calculated as

$$p_{g}=\sum_{k=N_{g}}^{N}C(N,k)\,\gamma^{k}\,(1-\gamma)^{N-k},$$

where γ is the proportional frequency of cells expressing the gene of interest ( M g / M ). A complete list of cluster-specific marker genes is provided in Supplementary Table .
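For readers who want to reproduce this test outside the original R workflow, the tail sum above is simply the survival function of a binomial distribution. The following is a minimal Python sketch (not the authors' code); the counts in the example call are invented for illustration.

```python
# Minimal sketch of the one-sided binomial marker-gene test described above:
# the probability of observing at least N_g expressing cells in a cluster of N cells,
# given the background expression frequency gamma = M_g / M.
from scipy.stats import binom


def marker_gene_pvalue(n_expr_in_cluster: int, n_cluster: int,
                       n_expr_outside: int, n_outside: int) -> float:
    gamma = n_expr_outside / n_outside          # background frequency (M_g / M)
    # Upper tail: sum_{k = N_g}^{N} C(N, k) * gamma^k * (1 - gamma)^(N - k)
    return binom.sf(n_expr_in_cluster - 1, n_cluster, gamma)


# Hypothetical counts: a gene detected in 180 of 200 cluster cells
# versus 300 of 4,500 cells outside the cluster.
print(marker_gene_pvalue(180, 200, 300, 4500))
```

In practice such per-gene P values would be corrected for multiple testing (for example with the Benjamini-Hochberg procedure) before applying the q < 0.01 cut-off quoted above.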
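The clustering workflow above was run in R with Seurat. Purely as an illustration of the silhouette-guided choice of the nearest-neighbour parameter k, the sketch below reproduces the idea in Python with Scanpy and scikit-learn; the input file name, the number of principal components and the candidate k values are assumptions, and this is an analogue rather than the authors' pipeline.

```python
# Illustrative Scanpy/scikit-learn analogue of the silhouette-guided selection of the
# k-nearest-neighbour parameter used for Louvain clustering (original analysis: R/Seurat).
import scanpy as sc
from sklearn.metrics import silhouette_score

adata = sc.read_h5ad("pn_neurons_filtered.h5ad")   # hypothetical pre-filtered, normalized dataset
sc.pp.pca(adata, n_comps=30)                       # number of PCs chosen from an elbow plot

scores = {}
for k in range(5, 55, 5):                          # candidate nearest-neighbour values
    sc.pp.neighbors(adata, n_neighbors=k, n_pcs=30)
    sc.tl.louvain(adata, key_added=f"louvain_k{k}")
    labels = adata.obs[f"louvain_k{k}"].to_numpy()
    # average silhouette width computed in PC space, as described above
    scores[k] = silhouette_score(adata.obsm["X_pca"], labels)

best_k = max(scores, key=scores.get)
print(f"k with the highest average silhouette score: {best_k}")
```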
Statistical analysis was performed using R v.4.0.3 (The R Project for Statistical Computing). All values are reported as mean ± s.e.m. Statistical significance was tested using two-sided Wilcoxon rank-sum tests, two-sided Wilcoxon matched-pairs signed-rank tests or one- or two-way ANOVA with Tukey post hoc test. P < 0.05 was considered to be significant. P values between 0.05 and 0.1 were noted in the figures. In experiments involving electrical fibre stimulation, stimulation artifacts were blanked for display purposes. In Figs. and , two mice were examined in each group, and similar results were generated. In Extended Data Figs. and , three independent repeats were performed with similar results and representative images were shown. In Extended Data Figs. and , two independent repeats were performed with similar results. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-024-07816-z.

Supplementary Table 1: Marker genes for each Pn neuron cluster (related to Fig. 5). The spreadsheet contains 10 tabs, each corresponding to a cluster of Pn neurons classified in Fig. 5b. In each tab, genes with a log-transformed effect size greater than 0.5 are listed.
Relationship between the expression of DLL3, ASCL1, TTF-1 and Ki-67: first steps of precision medicine in SCLC

Small cell lung cancer (SCLC) is an aggressive type of lung cancer that accounts for approximately 15% of lung cancer cases annually . Patients with SCLC have a poor prognosis, with a 5-year survival rate ranging from 3 to 27%, depending on the stage of the disease . SCLC is a highly proliferative lung cancer that is not amenable to surgery in most cases due to rapid growth, early spread, and a tendency to develop drug resistance and relapse . Genomic and proteomic alterations related to the development, plasticity, and progression of SCLC have already been described as possible biomarkers for targeted therapy of this deadly disease: TP53/RB1 (98%/91%), TP73 (13%), PIK3CA (15%), PTEN (9%), FGFR1 (8%), Hedgehog signaling pathway (80%), MYC (20%), KMT2D (13%), and NOTCH1 signaling (25%) . By July 19, 2022, 107 patients had received tarlatamab in dose exploration (0.003 to 100 mg; n = 73) and expansion (100 mg; n = 34) cohorts. The median progression-free and overall survival were 3.7 months (95% CI, 2.1 to 5.4) and 13.2 months (95% CI, 10.5 to not reached), respectively. Exploratory analysis suggests that selecting for increased DLL3 expression can increase clinical benefit . On May 16, 2024, the US Food and Drug Administration (FDA) granted accelerated approval to tarlatamab-dlle for extensive-stage small cell lung cancer (ES-SCLC) with disease progression on or after platinum-based chemotherapy . A phase 2 study was conducted on subjects with relapsed/refractory SCLC after two or more prior lines of treatment . Efficacy, safety, tolerability, and pharmacokinetics of tarlatamab were evaluated in 99 patients enrolled in DeLLphi-301, an open-label, multicenter, multi-cohort study . Tarlatamab, administered at a 10-mg dose every two weeks, showed antitumor activity with durable objective responses and promising survival outcomes in patients with previously treated SCLC. No new safety signals were identified . Tarlatamab (AMG 757) is the first DLL3-targeting bispecific T-cell engager therapy: the molecule binds both DLL3 and CD3, activating a patient’s T cells to attack DLL3-expressing tumor cells and leading to T-cell-mediated tumor lysis . DLL3 is a protein that plays a critical role in the Notch signaling pathway, which is involved in cell differentiation, proliferation, and apoptosis . In humans, DLL3 is predominantly expressed in neuroendocrine tissues. It is aberrantly expressed on the surface of up to 80–85% of SCLC cells and minimally expressed in normal tissues, making it a compelling therapeutic target , as in other neuroendocrine carcinomas . It is expressed both in the cytoplasm and in the membrane of SCLC cells . Despite the growing body of knowledge on the role of DLL3 in lung cancer, there remains a significant gap in our understanding of the actual expression rate of DLL3 when assessed by immunohistochemistry (IHC) in routine clinical laboratories. In a real-world study of DLL3 as an SCLC therapeutic target, positive DLL3 expression (defined as ≥25% of tumor cells) was identified in 895/1050 (85%) patients with one specimen and evaluable DLL3 expression; 719/1050 (68%) patients had high DLL3 expression (defined as ≥75% of tumor cells).
There was no significant difference in median overall survival from SCLC diagnosis for evaluable patients with non-missing data based on DLL3 expression (negative DLL3 expression, n = 139: 9.5 months; positive DLL3 expression, n = 747: 9.5 months; all evaluable patients, n = 893: 9.5 months) . With the advent of anti-DLL3 therapies, studies of the interrelationships between different molecules are still needed, for example with thyroid transcription factor-1 (TTF-1), which is involved in the differentiation of lung epithelial cells and is commonly expressed in high-grade lung and neuroendocrine adenocarcinomas, or with the Ki-67 protein (MKI67), a cellular marker of proliferation found in the nucleus of cancer cells that are actively growing and dividing . These relationships could provide insights into the tumor biology of SCLC and of rare tumors such as large-cell neuroendocrine carcinoma (LCNEC), which represents 1–3% of all primary lung cancers, and potentially guide treatment decisions and prognostication in a clinical setting . In this study, the qualitative and quantitative protein expression of DLL3, ASCL1, TTF-1, and Ki-67 was retrospectively analyzed by digital pathology in patients with SCLC, and this expression was linked to median overall survival using a multivariate mathematical model.

Patients’ characteristics

Sixty-four cases were included (mean age 71 ± 10), with a balanced gender distribution (32 females and 32 males, ). The mean age for males was 72 ± 10 years, and for females, 70 ± 10 years ( p = 0.460). Most patients were older than 60 (54 patients, 84.4%), as depicted in the population pyramid . The majority of cases were biopsied from lung parenchyma, either by transbronchial/endobronchial biopsies or transthoracic CT-guided procurement (56 cases, 90.3%). Four cases were pleural biopsies, and two were metastases in lymph nodes. Chromogranin was positive in 70.3% of cases, with 15.4% showing 1+ intensity, 19.2% 2+ intensity, and 23.1% 3+ intensity. Synaptophysin was positive in 83.8% of cases, with 24.0% showing 1+ intensity, 20.0% 2+ intensity, and 32.0% 3+ intensity. CD56 was positive in 94.4% of cases, and its intensity was not evaluated . All cases had at least one classical neuroendocrine marker positive and conventional small-cell carcinoma morphology. Fifteen patients (18%) were followed by palliative care and did not receive chemotherapy. All remaining patients included in the study received standard chemotherapy for small-cell neuroendocrine carcinoma. The follow-up was complete until the patients died from the disease. The mean overall survival was 77.5 days, with a 95% confidence interval of 36 to 116 days and a maximum of 557 days.

TTF-1 expression

While TTF-1 is not usually considered a conventional marker for diagnosing small cell carcinoma in most centers, it is positive in most cases. In the current cohort, it was positive in 33 cases (52%) and negative in 31 cases (48%) . The percentage of tumor cells with TTF-1 averaged 39.6% (SD 43.4). Eleven cases (18.3%) had 100% TTF-1 positivity. When assigned a histologic score of percentage versus intensity of positivity, cases had a median H-score of 37.30 (SD 110.08). Twenty-one cases (33%) had an H-score of 150 or higher .

Ki-67 expression

Ki-67 was positively expressed in all cases diagnosed with small cell carcinoma, consistent with its high proliferation rate . In the cohort, Ki-67 showed positive expression in 100% of the cases, with an average percentage of positive cells of 73.73% (SD 15.80).
The case with the highest expression exhibited an immunohistochemical positivity of 97.20%, while the case with the lowest expression showed positivity in 40% of the cells .

ASCL1 expression

Tissue was available for the study of ASCL1 in 64 cases . The H-score had a median of 57.08 (SD 54.55). Only two cases (3%) were completely negative for this antibody, while the majority (55 cases, 86%) had an H-score of 10–150 and were considered low expressors. Seven cases (11%) were considered high expressors. Only one case (1.4%) had an H-score of more than 250 .

DLL3 expression

DLL3-positive SCLC tissue was used as a positive control, and DLL3-negative lung adenocarcinoma tissue was used as a negative control. As per previously published data , the staining pattern was cytoplasmic and membranous . Forty-six cases (72%) had some expression of DLL3, and 18 (28%) were negative. Nineteen cases (30%) expressed DLL3 in less than 50% of tumor cells, while 27 (42%) expressed it in more than 50% of cells. When the H-score was calculated, only five cases (8%) scored above 150 .

Association between DLL3, ASCL1, TTF-1 and Ki-67 immunoexpression

Both TTF-1 and DLL3 were evaluated by the percentage of positive cells and by H-score. ASCL1 was evaluated by H-score. As expected, ASCL1 expression was strongly associated with synaptophysin positivity ( p = 0.003). ASCL1 expression did not differ with respect to age, Ki-67 positivity, chromogranin or TTF-1 expression . DLL3 expression was strongly associated with TTF-1 positivity. Tumors that were positive for TTF-1 had higher DLL3 expression both as a percentage of positive cells and as an H-score ( p < 0.001). The correlation between TTF-1 and DLL3 was positive .

Survival and multivariate analyses

The mean overall survival of all patients included in the study was 77.5 days . Age, sex, and all conventional neuroendocrine markers did not correlate with overall survival. Using Cox regression, epidemiological variables as well as TTF-1 and DLL3 expression were tested. TTF-1 negativity was a marker of worse prognosis in patients with SCLC compared with positive expression ( p = 0.014) . DLL3 and ASCL1 did not show any correlation with overall survival .
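As an illustration of how the survival comparison reported above could be set up programmatically, the sketch below fits Kaplan-Meier curves stratified by TTF-1 status and a Cox proportional hazards model with the lifelines library. The data frame, file name and column names are hypothetical, and the backward variable-selection step used in the study is omitted.

```python
# Hedged sketch: TTF-1-stratified Kaplan-Meier curves and a Cox proportional hazards model
# (hypothetical cohort file and column names; not the study's actual code or data).
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("sclc_cohort.csv")  # assumed columns: os_days, death, ttf1_positive, age, sex_male

kmf = KaplanMeierFitter()
for status, group in df.groupby("ttf1_positive"):
    label = "TTF-1 positive" if status == 1 else "TTF-1 negative"
    kmf.fit(group["os_days"], event_observed=group["death"], label=label)
    kmf.plot_survival_function()
plt.xlabel("Days from diagnosis")
plt.ylabel("Overall survival probability")
plt.show()

cph = CoxPHFitter()
cph.fit(df[["os_days", "death", "ttf1_positive", "age", "sex_male"]],
        duration_col="os_days", event_col="death")
cph.print_summary()  # hazard ratios and p-values for each covariate
```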
Precision medicine is an innovative approach to disease prevention and treatment that considers differences in people’s genes, injuries, environments, and lifestyles to target the right therapies to the right patients at the right time. In oncology, precision medicine uses genetic and molecular information to tailor treatment to an individual patient’s profile, optimizing efficacy and minimizing toxicities . This approach is revolutionizing lung cancer diagnosis and treatment.
However, despite being widely adopted, its benefit in clinical practice remains to be fully elucidated . SCLC continues to carry a poor prognosis, with a 5-year survival rate of 3.5% and a 10-year survival rate of 1.8% . The pathogenesis remains unclear, and no known predictive or diagnostic biomarkers exist. Delta-like ligand 3 (DLL3) is an inhibitory Notch ligand that is highly expressed in SCLC and has been identified as a potential therapeutic target . DLL3 expression is not commonly found in normal adult tissues, which makes it an attractive target for anti-cancer therapies . High DLL3 expression has been associated with poor prognosis in SCLC patients, suggesting its potential role as a prognostic biomarker . However, the prognostic significance of DLL3 expression in SCLC remains controversial, with some conflicting studies indicating a potential association between high DLL3 expression and overall survival . Therapeutic strategies targeting DLL3, such as antibody-drug conjugates (ADCs), bispecific T-cell engagers, and chimeric antigen receptor (CAR) T-cell therapies, are under development . Rovalpituzumab tesirine (Rova-T), an ADC targeting DLL3, has been evaluated in clinical trials, although it did not meet the expected outcomes in Phase III trials . Other investigational therapies, including bispecific T-cell engagers like tarlatamab (AMG 757) and CAR T-cell therapies targeting DLL3, have shown promise in preclinical models and early clinical trials . The study conducted by Furuta et al. provides critical insights into the expression of these proteins in surgically resected SCLC samples . The study reveals a high prevalence of DLL3 and ASCL1 expression in SCLC patients, with ASCL1 expression detected in 83% of the evaluated samples. These findings agree with our results, which showed 90% ASCL1 positivity. This high expression rate aligns with DLL3’s potential role in the disease’s pathology and supports the development of DLL3-targeted therapies. The positive correlation between DLL3 and ASCL1 expression further underscores their interconnected roles in SCLC’s molecular landscape, suggesting that interventions targeting these pathways could offer new avenues for treatment . Their study also explores the prognostic implications of DLL3 and ASCL1 expression, finding no direct association with patient survival. Similar to their findings, in our cohort we did not find any direct association of ASCL1 or DLL3 with overall survival, although we found a relation between positive TTF-1 expression and a better survival rate (quantified by the percentage of positive cells). These findings may be important in establishing practical protocols for scoring these immunohistochemical studies and selecting patients who may benefit from targeted therapies. Similarly, another recent study demonstrated that high DLL3 and ASCL1 expression was associated with certain morphological features in LCNECs and SCLCs, and in early-stage patients without metastasis who underwent chemotherapy, high expression of both DLL3 and ASCL1 was linked to a better prognosis and a lower risk of death . Furthermore, DLL3 expression in LCNEC was associated with the expression of ASCL1 and neuroendocrine markers, suggesting a relationship between DLL3 expression and the neuroendocrine profile of these tumors .
These findings suggest that DLL3 and ASCL1 are not only correlated in their expression but may also be involved in the neuroendocrine phenotype of lung neuroendocrine tumors and could serve as potential therapeutic targets or prognostic indicators in these diseases. Specifically, ASCL1-positive/DLL3-high tumors may represent a subgroup of SCLC with unique vulnerabilities to DLL3-targeted therapies. Further research is warranted to validate these findings and explore the clinical utility of ASCL1/DLL3 co-expression as a predictive biomarker for therapeutic response. In adenocarcinomas, TTF-1 has been shown to play a significant role in the pathogenesis of lung cancer, being expressed in 69–80% of lung adenocarcinoma cases. Clinically, TTF-1 expression is a diagnostic tool for identifying the histological type of lung cancer, distinguishing primary lung adenocarcinomas from metastatic forms, and acting as a prognostic indicator. Studies have shown that patients with positive TTF-1 expression exhibit longer overall survival (OS) in stage I lung adenocarcinoma . SCLC, typically characterized as an undifferentiated cancer, exhibits TTF-1 positivity in 80–90% of cases, indicating a function beyond epithelial cell differentiation. Evidence of TTF-1 expression in non-pulmonary small cell cancers, such as aggressive small cell prostate cancer, supports its association with neuroendocrine differentiation and aggressive tumor behavior rather than characteristics of terminal respiratory unit cells . Of interest in our samples was the association of the TTF-1 score with DLL3 expression, suggesting a potential role for TTF-1 as a differentiation and mechanistic marker rather than a purely diagnostic one. The significant prevalence of DLL3 and ASCL1 expression in early-stage SCLC, as highlighted by Furuta et al. and corroborated by our findings, underscores their potential as therapeutic targets and prognostic biomarkers . Our study further expands upon this, revealing a correlation between TTF-1-positive expression and improved survival outcomes, emphasizing the importance of standardized scoring protocols for these immunohistochemical markers. This may enable the identification of patient subgroups that could particularly benefit from DLL3-targeted therapies, potentially personalizing treatment approaches for SCLC. Additionally, the intriguing association between TTF-1 expression and DLL3, as observed in our study, suggests a multifaceted role for TTF-1 beyond its established diagnostic utility. This finding may have implications for understanding the molecular underpinnings of SCLC and could inform the development of novel therapeutic strategies. Further investigations into the mechanistic link between TTF-1 and DLL3 could uncover new avenues for intervention in this aggressive disease. Despite the promising insights and potential therapeutic implications highlighted in our study, there are several limitations that should be acknowledged. First, our study’s retrospective design may introduce selection bias, as it relies on previously collected data and samples, which may not be representative of the broader SCLC patient population. Additionally, the relatively small sample size limits the generalizability of our findings and may impact the statistical power to detect significant associations or differences in survival outcomes.
Furthermore, our study primarily focuses on the expression of DLL3 and ASCL1 in small SCLC samples, which may not fully capture the heterogeneity of SCLC, especially given that most cases are inoperable or treated with different modalities. The lack of longitudinal data to track changes in marker expression over time and in response to treatment is another limitation. Finally, the interpretation of immunohistochemical scoring can be subjective, and inter-observer variability might affect the consistency of the results, even with the scoring protocols attempted here. Future studies should aim to include larger, more diverse cohorts and incorporate prospective designs to validate these findings and enhance their clinical applicability. In summary, our findings and corroborative studies present a compelling case for the significance of TTF-1 in the clinical landscape of small-cell lung cancer. The evidence of a better survival rate in patients with high expression of these proteins, despite the generally poor prognosis associated with SCLC, indicates their potential utility as biomarkers and as focal points for targeted therapy. Future research should continue to explore the mechanistic pathways influenced by these proteins, with an emphasis on developing therapeutic strategies that can effectively exploit these targets. By advancing our understanding of DLL3 and ASCL1 within the broader context of lung cancer pathology, we can hope to refine diagnostic criteria and enhance the specificity and efficacy of treatment protocols, ultimately leading to improved survival rates and quality of life for patients afflicted by this formidable disease.

Cohort description

This observational, cross-sectional, and analytical study had a cohort of sixty-four sequential patients recruited between May 2018 and November 2022. Biopsies were analyzed in a reference thoracic pathology laboratory. Data were collected from electronic medical records in the respective hospital units where each patient was diagnosed and followed up. Inclusion criteria were defined as adults over 18 years of age with transbronchial biopsy of a primary SCLC tumor confirmed by histological analysis, sufficient material for the study of HE, DLL3, ASCL1, TTF-1, and Ki-67, and clinical follow-up to death. Exclusion criteria were age under 18 years, insufficient material for IHC analysis, lack of clinical data, or loss of clinical follow-up. This protocol was reviewed and approved by the Research Ethics Committee at the Federal University of Ceará (Protocol CAAE 59399322.9.0000.5049). The study was conducted under the Good Clinical Practice Guidelines and the Helsinki Declaration.

Immunohistochemistry

Each formalin-fixed, paraffin-embedded tumor tissue block was sectioned at 2 µm. Hematoxylin and eosin (HE) staining was performed. Slides were stained with anti-DLL3-specific monoclonal antibody (dilution 1:100; clone EPR22592-18; cat. no. ab229902; Abcam, Cambridge, UK); anti-ASCL1 polyclonal antibody (dilution 1:200; cat. no. PA5-77868; Invitrogen, Massachusetts, USA); anti-TTF-1-specific monoclonal antibody (prediluted; clone 8G7G3/1; cat. no. 790-4398; Ventana Medical Systems, Inc.); and anti-Ki-67-specific monoclonal antibody (prediluted; clone 30-9; cat. no. 790-4286; Ventana Medical Systems, Inc.). We used the Ultraview DAB IHC Detection Kit (cat. no. 760–500; Ventana Medical Systems, Inc.), which includes a blocking reagent and a secondary antibody conjugated with polymer.
Staining was performed using standard automated immunostaining equipment (Ultraview Benchmark Ventana; Ventana Medical Systems, Inc., Tucson, AZ, USA) according to the manufacturer’s protocol. Chromogranin, synaptophysin and Ki-67 staining had been performed previously for diagnosis and was retrieved from the pathology files. IHC slides had a positive control tissue: glioblastoma for DLL3, neuroendocrine tumor for ASCL1, thyroid tissue for TTF-1, and tonsil tissue for Ki-67. Positive and negative control slides were included in each assay. The slides were analyzed by optical microscopy to evaluate the positive and negative controls.

Digital pathology analysis

ASCL1, DLL3, TTF-1 and Ki-67 slides were scanned using the KFBIO scanner equipment at 40x magnification. The SVS files were then imported to QuPath ® software v. 0.5.0 as “DAB Brightfield,” which allowed sample analysis. The files were loaded onto a project in QuPath software (QuPath source code, documentation, and links to the software download are available at https://qupath.github.io ). QuPath’s segmentation feature can detect thousands of cells, identify them as objects in a hierarchical manner below the annotation or cases, and measure cell morphology and biomarker expression simultaneously (12). QuPath has recently been used as annotation software in deep learning to distinguish small-cell from large-cell neuroendocrine lung cancer . For each slide the stain vectors were recalibrated with “Estimate Stain Vector” using automatic calibration. Positive cell detection was performed by nucleus evaluation using default parameters; the nucleus staining intensity threshold was set at 0.1, and the cell expansion was set to the default of 5 micrometers, which is the default measurement for cell cytoplasm expansion from the nucleus until it meets the neighboring cell. The DAB intensity threshold was standardized according to each marker. For DLL3, the “thresholdCompartment” was set to “Cytoplasm: DAB OD Mean,” and for ASCL1, Ki-67, and TTF-1 the “thresholdCompartment” was set to “Nucleus: DAB OD mean.” For H-score analysis, the intensity threshold parameters were set with three threshold points: “thresholdPositive1” was set to 0.2, “thresholdPositive2” was set to 0.4, and “thresholdPositive3” was set to 0.6. The analysis was performed for each marker and the results were obtained as positive/negative status, percentage of positive cells, and H-score. A representative image depicts an example of DLL3 expression in a tumor, showing the deployment of the QuPath algorithm to assess cells with zero, low, moderate and high expression, color-coded and curated by an experienced pathologist. Snapshots of representative images were exported to ImageJ for storage and illustration, in high-quality TIFF format at 300 dpi and at least 5 inches in the shortest axis.

Scoring criteria for biomarkers

For DLL3, ASCL1, and TTF-1, IHC scoring was performed in two ways. First, the staining was semi-quantitatively evaluated using an immunohistochemical H-score (HS) method by an experienced thoracic pathologist and also by using a freely available algorithm implemented in QuPath . The H-score method was applied based on the extent and intensity of cytoplasmic staining (1, 2, or 3) multiplied by the percentage of cells positive (proportion score), with a potential score ranging from 0 to 300. The H-score is a classic semi-quantitative method used in pathology to assess the intensity and distribution of immunohistochemical staining in tissue samples.
It is particularly valuable in research for evaluating the expression levels of various proteins within specific cells or tissue regions, which can be crucial for diagnosing and determining the prognosis of diseases, especially cancer. It has been used in several organ systems and cancer types, including oral squamous cell cancer, kidney cancer, breast cancer and lung cancer . Over the past decade, several studies have developed automated algorithms for the quantitative assessment of IHC images. However, significant efforts are still needed to improve quantification accuracy and efficiency . More recently, several articles have automated the use of H-scoring to increase accuracy and reproducibility, using the QuPath software, as in the current study . The second approach was the analysis of the percentage of positive cells (0–100%). The cut-offs for negative versus positive and low versus high expression were set according to each protein’s expression profile, as described in previous studies . DLL3 and TTF-1 were considered positive if at least 1% of tumor cells showed cytoplasmic and/or membranous staining for DLL3 or nuclear staining for TTF-1. Both proteins were considered low expression if positive in less than 50% of tumor cells, while high expression was assumed if the protein was positive in more than 50% of tumor cells. ASCL1 was considered positive if at least 10% of tumor cells had nuclear staining. For ASCL1, cases with an H-score ≤10 were considered negative, H-scores of 11–149 were considered low expression, and 150–300 were considered high expression. Chromogranin and synaptophysin were considered positive if at least 5% of tumor cells had cytoplasmic and/or membranous staining. In addition, a semi-quantitative scoring of 1, 2, and 3 intensity of staining was estimated by at least one pathologist. CD56 staining was considered positive only when membranous staining was present; otherwise it was considered negative . The most recent 2021 WHO classification identifies the three markers indicative of neuroendocrine (NE) differentiation: chromogranin A, synaptophysin, and CD56. In addition, it mentions INSM1 as a potential new marker . Determining positivity for these markers lacks defined thresholds, necessitating consideration of morphological features. Chromogranin and synaptophysin are genuine indicators of NE differentiation, as they bind to epitopes present in neurosecretory granules or synaptic vesicles. In SCLC, even focal positivity for chromogranin A in some tumor cells is accepted as diagnostic .

Statistical analysis

Univariate descriptive statistics were performed on the collected data. Normal variables were reported by their mean and standard deviation, and non-normal counterparts by median and interquartile range; count data were reported by absolute frequency and percentage. Overall survival analysis included univariate Kaplan-Meier curves using different biomarker strata according to DLL3, ASCL1, and TTF-1 presence, expression levels, and gender. Multivariate analysis included a correlation plot over the numerical variables and Cox regression analysis using a backstep variable selection strategy.
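To make the scoring explicit, the sketch below computes a three-threshold H-score from per-cell DAB optical densities, using the 0.2/0.4/0.6 cut-offs quoted above for the QuPath analysis. The per-cell measurements in the example are simulated, so this is an illustration of the calculation rather than the study's actual pipeline.

```python
# Sketch of the three-threshold H-score described above: cells are binned as 0/1+/2+/3+
# by mean DAB optical density (cut-offs 0.2, 0.4, 0.6), then
# H-score = 1*(%1+) + 2*(%2+) + 3*(%3+), giving a value between 0 and 300.
import numpy as np


def h_score(dab_od_per_cell, cutoffs=(0.2, 0.4, 0.6)):
    bins = np.digitize(dab_od_per_cell, cutoffs)   # 0 = negative, 1/2/3 = weak/moderate/strong
    n_cells = len(dab_od_per_cell)
    return sum(level * (np.sum(bins == level) / n_cells * 100) for level in (1, 2, 3))


# Simulated per-cell DAB optical densities standing in for a cell-detection export.
rng = np.random.default_rng(0)
example_od = rng.uniform(0.0, 0.8, size=5000)
print(round(h_score(example_od), 1))   # single H-score for the annotated tumor region
```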
Exploring student perceptions on virtual reality in anatomy education: insights on enjoyment, effectiveness, and preferences

Anatomy education plays a fundamental role in the global medical school curriculum, serving as the cornerstone for building a robust preclinical knowledge base vital for future physicians. A deep understanding of human anatomy is crucial for performing successful physical examinations, interpreting clinical symptoms, conducting surgeries, and undertaking a wide range of medical interventions . Historically, the teaching of anatomical knowledge has predominantly revolved around cadaveric dissection. This traditional approach excels in its ability to unravel the intricacies of large organs, present three-dimensional bodily structures, showcase the spectrum of normal anatomical variations, and provide insights into clinically relevant aspects . Advocates of traditional dissection passionately affirm its indispensability, emphasizing the irreplaceable value of hands-on learning . However, it is imperative to acknowledge that this conventional method is not without limitations. It falls short in effectively imparting certain complex anatomical concepts, such as surface anatomy, the details of small organs, and the complexities of nerves, vessels, and lymphatics . Moreover, for some students, the dissecting room becomes a source of stress and anxiety . Recognizing these challenges, an ever-growing body of educators, students, and researchers has posited that sole reliance on dissection may not fully equip medical students to meet the multifaceted demands of modern healthcare . In response to these evolving considerations, a substantial shift has swept through the realm of medical education in recent years. This transformation is characterized by the embrace of computer-based and multimedia-assisted educational tools, encompassing videos, animations, three-dimensional models, and virtual microscopy, all designed to elevate the teaching of anatomy . In such a landscape, Extended Reality (XR) technologies have emerged as powerful tools for enhancing the learning experience, particularly in the field of anatomy education. XR is an umbrella term that encompasses various immersive technologies, including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), each offering unique ways to engage with and understand complex anatomical structures . Among these, VR has gained significant attention as an innovative method for augmenting anatomy education. To assess the efficacy of VR in anatomy education, a series of studies has been undertaken. These studies examine the utility of VR in comparison to traditional methods, including lectures and cadaveric dissection, and consistently reflect strong support for VR technology in enriching anatomical knowledge . Furthermore, the convergence of digital anatomy and VR within medical training is poised to propel advances in healthcare practices, harnessing strengths and embracing opportunities while acknowledging limitations . Several studies have spotlighted the enhanced understanding of anatomy achievable through VR-based methods, as evidenced by studies on heart anatomy and improved test scores . Additionally, the development of VR software tailored for cranial anatomy education underlines the technology’s potential applications in specialized domains .
It is of paramount importance to underscore the necessity for standardized implementation and comprehensive assessment of VR in medical education . While the benefits of VR in anatomy education are well-established in the literature, the specific gap this work addresses is the application of VR tools, such as 3D-Organon, in unique educational and cultural settings. Notably, this study is the first to explore the implementation of VR in anatomy teaching at Qatar University. Our primary aim is to measure the receptiveness of these students to virtual anatomy dissection, thereby illuminating the transformative potential of this innovative approach. This study is particularly focused on establishing norms for learning preferences and evaluating the perceived effectiveness of 3D-Organon VR anatomy software within the context of anatomy courses. To achieve this, we delve into students’ perceptions and acceptance of VR technology, making comparisons with traditional learning tools such as plastic models and the Anatomage table, an advanced virtual dissection system, during anatomy practical sessions. In addition, we investigated whether there were significant differences between students who utilized VR before versus after engaging with other educational modalities. This comparison was integral to understanding the varied impacts of VR in the context of different educational sequences. Ultimately, our research strives to offer valuable insights into how VR can reshape medical education, enhancing the educational experiences of future physicians.

Study population

Participants for this study were drawn from Year-1 to Year-4 students in the College of Medicine (CMED) at Qatar University enrolled in anatomy lab courses during the 2023/2024 academic year. All students were asked to voluntarily participate in the study after reading and signing a consent form approved by the IRB committee at Qatar University (No. 1844-EA/23). In regular anatomy lab sessions, students used a variety of learning modalities, including plastic/plastinated models and the Anatomage table. As part of the study, an additional optional station featuring VR Oculus headset devices was introduced. Students were given a structured opportunity to explore the anatomical structures relevant to the designated lab session, focusing on specific body systems.

Study design

The primary objective of this study was to compare the perceived effectiveness of VR education models to standard anatomy education methods in enhancing students’ understanding of human anatomy. Additionally, the study aimed to assess students’ attitudes toward virtual anatomy dissection using 3D-Organon VR anatomy software, in comparison to their attitudes toward regular anatomy labs. To evaluate these aspects, an anonymous questionnaire was utilized. Each lab session lasted two hours and focused on a specific anatomical system. During these sessions, students had the opportunity to explore the corresponding anatomical structures using VR, alongside traditional learning tools such as the Anatomage table and plastic models. This alignment allowed for direct comparisons between the different modalities. Students were given 5 min to familiarize themselves with the VR headsets and software before proceeding. Following this brief orientation, they were given 10 min to explore the relevant anatomical structures using the VR devices.
Data collection and analysis

The questionnaire was distributed in a paper-based format to the students after they had used the various learning modalities in the lab. The collected data were processed and analyzed to generate descriptive statistics and identify correlations. The study focused on comparing the experiences of students who used VR before engaging with other educational modalities to those who used VR after the other modalities. Fisher’s exact test for independence was used to examine associations between variables, given its appropriateness for categorical data analysis. All statistical analyses were conducted using GraphPad Prism V.9.

The study enrolled 223 participants across various academic years, spanning Year-1 to Year-4 (Fig. ). The mean age of participants was 19.3 years. In terms of gender distribution, the participants included 71 male students (31.8%) and 152 female students (68.2%). Furthermore, the demographic composition reflected 83 national Qatari students (37.2%) and 140 non-Qatari students (62.8%).
Notably, the majority of non-Qatari students were from Egypt (15%), Jordan (12.1%), Syria (11.4%), and Pakistan (9.3%).

Overall student perceptions on VR

Enjoyment and learning
73% of respondents strongly agreed that they enjoyed learning anatomy through VR, underscoring a notably positive reception of VR technology in this educational domain (Q1, Fig.).

Effectiveness of VR
Opinions varied regarding the sufficiency of VR for anatomical knowledge in the absence of traditional lectures. Responses indicated a spectrum of viewpoints, with 18% strongly agreeing, 19% agreeing, and 30% remaining neutral. In contrast, 24% disagreed, and 8% strongly disagreed (Q2, Fig.).

Enhancement of understanding
When assessing VR's impact on comprehending anatomy lectures, a majority (58% strongly agreed and 30% agreed) acknowledged that VR significantly improves their understanding (Q3, Fig.).

Memorization and academic performance
While 37% of respondents strongly agreed that VR aided in better memorization of anatomical details, 42% agreed, and 17% remained neutral (Q4, Fig.). Furthermore, 41% strongly agreed that regular VR labs could potentially bolster their grades in anatomy exams, with 35% in agreement (Q5, Fig.).

Engagement and motivation
The exploration of engagement and motivation levels in VR anatomy labs revealed that 58% strongly agreed, 29% agreed, and 12% remained neutral (Q6, Fig.).

Preferences in learning environment
Findings suggested a strong inclination towards group-based VR learning, with 63% strongly agreeing, 27% agreeing, and 7% remaining neutral (Q7, Fig.). Additionally, 31% strongly agreed, 36% agreed, and 23% were neutral in their preference for an instructor-guided VR experience (Q8, Fig.). A significant 69% strongly agreed that having unrestricted access to VR for self-directed learning was preferable, with 23% in agreement (Q9, Fig.).

Preference for replacing traditional methods
Opinions diverged concerning the substitution of traditional anatomy education methods with VR technology. Notably, 33% strongly agreed, 20% agreed, 34% were neutral, 11% disagreed, and 2% strongly disagreed regarding replacing virtual anatomy dissection (Anatomage table) with VR (Q10, Fig.). Similarly, 24% strongly agreed, 13% agreed, 27% were neutral, 30% disagreed, and 6% strongly disagreed regarding replacing plastic/plastinated anatomy models with VR (Q11, Fig.). A substantial 69% strongly agreed, 24% agreed, and 7% were neutral on preferring a combined approach utilizing VR, Anatomage, and plastic/plastinated anatomy models (Q12, Fig.).

Recommendation and reasons
We found that 56% strongly agreed, 37% agreed, 6% were neutral, and 1% disagreed that they would recommend the use of VR for other students and courses, particularly in medical fields such as radiology and pathology, where visualizing both radiological and pathological changes can significantly enhance learning (Q13, Fig.). Regarding reasons for studying anatomy through VR, 32% agreed, 30% were neutral, 31% disagreed, 6% strongly disagreed, and 1% remained uncertain (Q14, Fig.).

Consideration of class size
Regarding the impact of class size on anatomy education, 37% strongly agreed, 32% agreed, 24% were neutral, 6% disagreed, and none strongly disagreed when considering the large number of medical students as a motivation to study anatomy through VR (Q15, Fig.).

Navigating student preferences

Preferences in anatomy learning methods
Respondents expressed their preference for learning anatomy through various methods.
The majority favored VR (88.8%), followed by plastic/plastinated models (79.8%), Anatomage (46.6%), and other methods (10.3%), primarily citing cadavers and mobile apps (Q16, Fig.).

Favorite method for learning anatomy
When asked to choose their favorite method for learning anatomy, VR garnered the highest preference at 48.9%, followed by plastic/plastinated models (35.0%), Anatomage (9.4%), and other methods (6.3%), notably including cadavers and textbooks (Q17, Fig.).

VR before vs. after other educational modalities
In examining the impact of engaging with other educational modalities on students' perceptions of VR in anatomy education, several noteworthy trends emerged.

VR's impact on understanding anatomy lectures
A significant difference was observed when comparing the responses between the group that was exposed to VR prior to engaging with other educational modalities (such as Anatomage and plastic/plastinated anatomy models, referred to as VR1) and the group that was exposed to VR after using these modalities (VR2). In VR1, 46% of students strongly agreed that VR improved their understanding of anatomy lectures, whereas 64% in VR2 expressed strong agreement. This shift suggests an increased positive perception of understanding anatomy lectures after exposure to additional educational modalities (Q3, Table).

Expectations on VR labs and academic performance
In assessing beliefs regarding the impact of VR labs on academic performance, a noticeable change was observed. In VR1, 31% strongly agreed that implementing more VR lab sessions into the students' weekly class schedules would help improve grades, compared to 46% in VR2. This suggests an increased positive expectation regarding the contribution of VR labs to academic performance after exposure to other educational modalities (Q5, Table).

Preference for VR in practical sessions
Students' preferences for VR as a replacement in practical sessions exhibited a shift. In VR1, 24% strongly agreed, while 38% in VR2 expressed strong agreement. Conversely, the percentage of students neutral or in disagreement decreased after exposure to additional educational modalities, indicating a shift in preference towards using VR as a replacement (Q10, Table).

Recommendation of VR for other students and courses
When considering students' willingness to recommend VR, the data indicated a notable change. In VR1, 43% strongly agreed to recommend VR, while in VR2, 62% expressed strong agreement. This implies an increased likelihood of recommending VR to other students and courses after exposure to additional educational modalities (Q13, Table).

Preferred method for learning anatomy
Examining preferences for learning anatomy, the shift in favor of VR was evident after engaging with other educational modalities. In VR1, 39% preferred VR, whereas in VR2, this percentage increased to 54%. This shift underscores a heightened preference for VR as a learning method after exposure to additional educational modalities (Q17, Table).
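The group comparisons reported above rest on contingency tables of categorical responses, analysed with the Fisher's exact test named in the data collection and analysis section. The Python snippet below is a minimal sketch of that kind of comparison for a single questionnaire item; the counts are fabricated for illustration and do not correspond to the study's group sizes, and collapsing the five-point scale into "strongly agree" versus all other responses is an assumption, since the text does not specify how the tables were formed.

```python
from scipy.stats import fisher_exact

# Illustrative 2x2 table for one questionnaire item (fabricated counts, not study data):
# rows    = exposure order (VR before other modalities, VR after other modalities)
# columns = response collapsed to "strongly agree" vs. all other Likert options
table = [
    [23, 27],   # VR1: strongly agree, other responses
    [32, 18],   # VR2: strongly agree, other responses
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

GraphPad Prism's contingency-table analysis performs the same test; the dichotomization shown here is only one possible way of grouping the responses.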
The primary focus of this study was to assess how medical students perceive and engage with VR technology within the realm of anatomy education. Through a comprehensive survey comprising 17 questions, we aimed to probe into the diverse attitudes and opinions of students, covering various aspects of VR implementation in anatomy education. The resulting insights illuminate the landscape of student preferences and attitudes toward the integration of VR into their anatomy learning experiences.

Integration of VR in anatomy education
The integration of VR into medical education, particularly in anatomy instruction, represents a paradigm shift from traditional teaching methods. This study aligns with the broader trend in medical education, reflecting a departure from exclusive reliance on cadaveric dissection towards a more diversified approach that incorporates technology-enhanced learning. The limitations of traditional dissection, including challenges in conveying certain anatomical concepts and the emotional stress experienced by students, have been well documented in the literature. Dissection, while beneficial for surgical skill development and understanding whole-body pathology, has also been associated with significant emotional and psychological stress, particularly among students unfamiliar with cadavers or unprepared for the dissecting room experience. The stress experienced in these environments, ranging from intrusive thoughts to symptoms resembling post-traumatic stress disorder (PTSD), underscores the need for supplementary educational modalities. These findings highlight the potential value of integrating VR as an adjunct to traditional methods.

Comparison with previous research
Several studies mentioned in the introduction consistently support the idea that VR technology contributes positively to anatomical education. The observed positive reception of VR in this study, evidenced by the majority expressing enjoyment (73%), improved understanding (58%), and better memorization (79%), aligns with findings from studies focusing on specific anatomical areas like heart anatomy. The endorsement of VR by the majority of students reflects a trend observed in other studies emphasizing the effectiveness of VR in enhancing test scores.
Effectiveness and limitations of VR
While the positive feedback on enjoyment and learning is encouraging, opinions on the sufficiency of VR as a standalone tool for anatomical knowledge were more diverse. This aligns with the existing discourse in the literature that acknowledges the benefits of VR but also calls for a balanced approach that integrates it with traditional methods. The study underscores the importance of considering a combined approach, recognizing that VR, while beneficial, may not entirely replace traditional methods.

Preferences and learning environment
The preference for group-based VR learning (63%), instructor-guided experiences (67%), and unrestricted access for self-directed learning (69%) highlights the nuanced nature of student preferences. This aligns with literature emphasizing the significance of a student-centered, flexible learning environment in medical education. The study's findings also resonate with the broader discourse on the importance of collaborative and guided learning experiences in the context of VR.

Substitution of traditional methods
The study delves into students' willingness to replace traditional methods with VR, revealing varied opinions. The inclination to replace traditional methods, such as virtual anatomy dissection (67%) and plastic models (57%), with VR suggests a readiness for technological integration. However, a significant portion of students remains neutral, emphasizing the need for a balanced approach that caters to diverse preferences. This aligns with literature advocating for a thoughtful and gradual integration of technology into medical education.

Impact of exposure to educational modalities
The notable shifts in student perceptions after exposure to additional educational modalities highlight the dynamic nature of attitudes towards VR. The increased positive perception in understanding anatomy lectures (Q3) and the heightened preference for VR as a learning method (Q17) after exposure to other modalities underscore the potential influence of varied learning experiences on students' views.

Recommendation and future directions
The majority's willingness to recommend VR for other students and courses (93%) suggests a positive outlook on the technology's broader applicability. The study contributes to the literature by emphasizing the importance of understanding students' perspectives in shaping the future of medical education. Future research could explore the long-term impact of VR integration, consider faculty perspectives, and investigate the optimal balance between traditional and technological approaches in anatomy education.
In conclusion, this study adds valuable insights to the discussion on integrating VR into anatomy education. The positive reception of VR by medical students and the diversity of opinions emphasize the need for a flexible approach.
Exposure to alternative educational methods proves influential in shaping students' favorable views of VR, extending beyond mere reception to impact overall perceptions, preferences, and expectations in anatomy education. The findings stress the importance of a blended approach that combines technological innovation with traditional teaching methods, highlighting the significance of adaptability in shaping the future of anatomy education.
Sleep medicine and chronobiology education among Brazilian medical students

Sleep medicine and chronobiology are two prominent fields of knowledge that encompass vital aspects of human health, namely sleep and other biological rhythms. The recent recognition of these interconnected sciences was underscored by the prestigious 2017 Nobel Prize in Physiology or Medicine, bestowed upon Jeffrey C. Hall, Michael Rosbash, and Michael W. Young for their groundbreaking contributions to understanding the circadian rhythm. Chronobiology is the science of the intricate biological rhythms governing various organisms. Its primary focus is the study of the impact of time on biological events and the internal biological clocks that regulate these rhythms. Over the past several decades, chronobiology has evolved into a multidisciplinary field of great interest, particularly in the realm of general medicine. Besides the sleep-wake cycle, other biological rhythms include heart rate, respiratory rate, menstrual cycle, and pulsatile hormonal secretion. Among the circadian rhythms (around 24 h), the sleep-wake rhythm emerges as the most extensively studied and discussed in relation to human health, with a substantial portion of the global population experiencing sleep-related issues. Given that a multitude of human functions, both physical and cognitive, display circadian rhythmicity, it intuitively follows that disturbances in the endogenous machinery regulating these oscillations could lead to physical and mental symptoms, as well as pathological conditions. Throughout history, the medical literature had a limited focus on sleep disorders, primarily encompassing disturbances perceived as troublesome by those affected, such as insomnia. Other sleep disorders arising from physiological system malfunctions during sleep, such as sleep-related respiratory disorders, remained largely unknown or overlooked until the advent of sleep monitoring techniques, particularly in the second half of the 20th century. The establishment of sleep medicine as an independent medical specialty, along with its diagnostic procedures and therapeutic strategies, became possible thanks to seminal discoveries in neurophysiology and basic sleep research. These milestones marked a crucial turning point in the field of sleep medicine, enabling it to evolve into a specialized discipline with its own unique contributions to the medical landscape. Over the past three decades, several studies have highlighted a significant gap in education on sleep and chronobiology in medical curricula worldwide. For example, Romiszewski et al. demonstrated that education on sleep remains insufficient in medical schools in the United Kingdom, even after twenty years of recognizing the importance of the subject. Similar studies conducted in the United States, Saudi Arabia, China, and Lebanon support these observations and highlight the urgent need to integrate these topics into medical curricula. Previous data from our research group have indicated a limited interaction between chronobiology and psychology in the country. Additionally, despite the fact that most psychologists report an increase in patients with sleep-related issues, a lack of familiarity with basic concepts of chronobiology and sleep science has been identified among psychologists, likely because 75.97% of them had no academic contact with biological rhythms during their training.
This suggests that education needs to be expanded not only in medicine but also in other health fields. Similarly, chronobiology is not typically included in the education of biology or medical students in the majority of European countries. The current situation regarding the inclusion of these subjects in the curriculum of Brazilian medical schools remains unclear. However, we hypothesize that there is a significant underrepresentation of these topics, which may contribute to the underdiagnosis and undertreatment of sleep disorders in the country. Furthermore, this study can serve as a foundation for future research aimed at advancing the study of sleep medicine and chronobiology within medical schools, as it provides an overview of the Brazilian landscape concerning the incorporation of these subjects into the curriculum. Thus, our objective was to assess the exposure of Brazilian medical students in their final two years of undergraduate medical education to the fields of chronobiology and sleep medicine and to evaluate their general knowledge in these areas.

Study design and period
A cross-sectional study was conducted from December 1, 2021 to June 30, 2022. This study used self-reported online questionnaires and was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Ethical Committee of the State University of Santa Cruz (CEP-UESC) under Certificate of Presentation for Ethical Appreciation (CAAE; #52462921.0.0000.5526). Written informed consent was obtained from all the participants. The Google Forms platform was used to obtain informed consent and responses.

Sampling and recruitment
The research targeted students in the final two years of undergraduate medical programs at Brazilian medical schools that are duly registered and listed on the e-MEC portal maintained by the Ministry of Education. The sample was obtained by contacting undergraduate medical students through social media networks associated with medical schools, including athletic clubs, academic centers, extension projects, and similar platforms. In this initial communication, the research title, the desired participant profile, and the researchers' contact information were provided. The aim was to emphasize the significance of their participation and address any inquiries. The inclusion criteria were as follows: 1) being an undergraduate medical student at a Brazilian educational institution; 2) enrollment in the last two years of the program; and 3) specifying whether they were students in the fifth or sixth year. Upon their agreement to participate in the study, data collection was conducted using a self-administered questionnaire within a virtual environment. The questionnaire was hosted on a freely accessible platform (Google Forms), and participants were asked to share the link with their classmates. Access was granted after participants provided an email for identification purposes and agreed to the free and informed consent form. Once these steps were completed, participants were granted access to the subsequent stage, which encompassed the questionnaire. According to the 2020 Medical Demographics study in Brazil, the number of undergraduate medical students participating in the National Student Performance Exam (ENADE) in higher education institutions was 20,618 in 2019. This number was used to calculate the sample size, assuming a confidence level of 95% and a margin of error of 5%, which resulted in a minimum of 240 students.
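For readers who want to see how a minimum sample size of this kind is typically derived, the sketch below applies Cochran's formula with a finite-population correction to the ENADE population quoted above. It is an illustration only: the expected proportion and any other design assumptions used by the authors are not reported, so the parameter p is left explicit rather than claiming to reproduce the reported figure.

```python
from math import ceil
from scipy.stats import norm


def minimum_sample_size(population: int, margin: float, confidence: float, p: float) -> int:
    """Cochran's formula with a finite-population correction."""
    z = norm.ppf(1 - (1 - confidence) / 2)      # critical z-value, ~1.96 for 95% confidence
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # sample size for an infinite population
    n = n0 / (1 + (n0 - 1) / population)        # correct for the finite ENADE population
    return ceil(n)


# Illustration only: p (the expected proportion) is an assumption, not a value from the paper.
print(minimum_sample_size(population=20618, margin=0.05, confidence=0.95, p=0.5))
```

Varying p shows how strongly the assumed proportion drives the required minimum; p = 0.5 is the most conservative choice.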
All study participants volunteered to participate in the study, resulting in the recruitment of 243 students. One duplicate response, one individual outside the target audience, and one individual who did not provide responses to any items on the form were excluded from the analysis. Hence, a total of 240 samples were included and analyzed for the study.

Questionnaire
The online questionnaire was developed by the authors in Portuguese (English version: Supplementary Table S1; original Portuguese version: Supplementary Table S2) and was structured into three sections.
I. Student data: age (18-24 or >24), sex, academic year (5th or 6th year), the Brazilian state where the medical school is located, and the name of the institution.
II. Research data: exposure to chronobiology and sleep medicine across basic, clinical, and internship cycles. This section was created by the researchers to identify the subjects' knowledge of sleep medicine and chronobiology within their overall medical school curriculum, rather than within a specific course.
III. Questionnaire on Basic Knowledge in Sleep Medicine and Chronobiology: This section included ten questions adapted from the Assessing Sleep Knowledge in Medical Education (ASKME) survey. The ASKME questions selected were intended to be the most general questions regarding the sleep-wake cycle. The questionnaire was designed to evaluate long-term memory rather than highly specific technical knowledge or recent memory. It examined the student's exposure to the topic during their undergraduate studies using true, false, or "don't know" questions.

Statistical analysis
The questionnaires were checked for completeness, coded, and entered into a Microsoft Excel table and then exported to SPSS (v.22, IBM, USA) for analysis. Categorical variables are reported as frequencies and percentages. In the analysis related to the internship cycle, we exclusively considered students in their final year (6th year). Bivariate analysis was used primarily to check the association of independent variables with the dependent variable (≥80% of correct answers in the questionnaire). Multiple logistic regression models were conducted, using age, sex, country region, and academic year as covariates for all other independent variables with the aim of analyzing potential confounding effects. The first category of each independent variable was considered as the reference group. The variables with a significant association were identified based on odds ratio (OR) with P-value ≤0.05. Additionally, Spearman's correlation was used to assess the relationship between the number of cycles in the medical program during which students were exposed to chronobiology or sleep medicine and the percentage of correct answers regarding basic chronobiology and sleep medicine knowledge.
A total of 240 students responded to the questionnaires. The most represented age group was 18 to 24 years, corresponding to 51.8% of the total. Additionally, 63.2% of the participants were women (n=152). Responses were obtained from students across the country, representing 96 institutions, with higher representation from the northeast (36.5%) and southeast (19.9%) regions. There was a maximum of 11 responses from two universities, namely the Universidade Estadual de Santa Cruz (UESC) and the Universidade Federal do Amazonas (UFAM), while the other universities had a smaller and similar number of responses. Regarding their stage in medical school, 153 students were in the 5th year and 87 students were in the 6th and final year.

Next, we examined whether students were exposed to sleep medicine or chronobiology throughout their medical undergraduate disciplines. Our observations revealed a strong exposure during the basic cycle (first two years of the undergraduate medical program in Brazil; 87.5%), with a gradual decline during the clinical cycle (third and fourth years of the undergraduate medical program in Brazil; 77.1%) and the internship phase (last two years of the undergraduate medical program in Brazil; 65.5%). Additionally, 11 respondents (4.6% of the total) reported having had no contact with the subject throughout their entire undergraduate education, while 229 (95.4% of the total) had some exposure at some point. Notably, 62.1% had coursework related to these issues during all cycles of the medical program.

Students also indicated that, during the basic cycle, the disciplines related to sleep medicine or chronobiology mainly covered elementary knowledge of sleep physiology (84.8%), neuroanatomical substrates of sleep and wakefulness (46.1%), and chronobiology of sleep and wakefulness (98.0%). During the clinical cycle, the students indicated that the offered contents mainly covered sleep hygiene (64.6%), insomnia (61.2%), and treatment of sleep disorders (57.9%), but there was less contact with sleep diagnostics and investigation (25.4%).
During the internship, which was the current cycle of the respondents' program, they studied sleep medicine or chronobiology particularly in disciplines such as psychiatry (28.7%), clinical medicine (20.7%), and family medicine (19.5%). Additionally, when asked about receiving guidance from professors on addressing sleep-related aspects during patient history taking (anamnesis) and diagnosis, 24 students (27.6%) reported not receiving such guidance, while 63 students (72.4%) reported having received guidance. Referring to all periods of their medical undergraduate program, when asked about the existence of any other mandatory core curriculum programs or elective modules dedicated to sleep medicine and chronobiology, only 9 students (3.7%) responded yes, with 6 being optional and 3 being mandatory (data not shown). In addition, when asked if sleep medicine or chronobiology content was covered in elective courses and at which period this occurred, 37 (15.4%) students responded yes, mainly during the basic cycle of the program (data not shown).

When asked about barriers in the training in chronobiology and sleep medicine, 161 students (67.1%) cited insufficient dedicated time as the primary obstacle. Additionally, 131 students (54.6%) highlighted the insufficient immersion of students in clinical settings, including the low availability of outpatient clinics and practical experiences, as another significant challenge. Furthermore, minor issues such as a shortage of qualified faculty, inadequate educational resources, and ineffective administrative policies were also noted.

In the questionnaire of basic knowledge on sleep medicine and chronobiology, the average rate of correct answers was 79.75% (50-100%) across all questions. Notably, the highest error rates were observed in specific topics: sleep and pre-adolescence (with 82.08% of wrong answers) and the influence of drugs such as antihistamines and beta-blockers on sleep (with 48.33% of wrong answers). Conversely, questions related to vital signs and circadian rhythms exhibited a moderate level of accuracy, with 25.83% of wrong answers, and those concerning work shifts and sleep had an error rate of 27.92%. Lastly, the remaining questions demonstrated a higher percentage of correct answers, ranging between 92.92 and 98.75%.

A mild positive correlation was observed between the number of cycles (including basic, clinical, and internship, or none) during which students were exposed to disciplines related to sleep medicine or chronobiology and the number of correct answers to the corresponding questions.

The logistic regression analyses, both univariate and multivariate, indicated that a higher percentage of correct answers (≥80%) was not associated with most independent variables, such as sex, age, and country region. However, a noteworthy association was found with academic year in the medical program, demonstrating that final-year students performed better than their counterparts in the fifth year (reference group). Additionally, despite the absence of an association with exposure to chronobiology or sleep medicine disciplines in each cycle of the medical program, the highest prevalence of correct answers was observed among students who engaged with these topics throughout all cycles. Similar results were observed with multivariate analysis, indicating that the findings were not influenced by age, sex, country region, or academic year.
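To make the analytic steps behind these results concrete, the sketch below shows, on fabricated example vectors rather than the study data, the Spearman correlation between the number of program cycles with sleep/chronobiology content and the knowledge score, and the dichotomization of scores at ≥80% correct that served as the dependent variable in the logistic regression models.

```python
import numpy as np
from scipy.stats import spearmanr

# Fabricated example data (not the study data set):
# cycles = number of program cycles (0-3) with chronobiology/sleep medicine content
# score  = percentage of correct answers on the 10-item knowledge questionnaire
cycles = np.array([0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 1, 0])
score = np.array([50, 60, 70, 70, 80, 90, 80, 90, 100, 90, 60, 70])

rho, p_value = spearmanr(cycles, score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Binary outcome used in the logistic regression models: at least 80% correct answers
high_scorer = (score >= 80).astype(int)
print(high_scorer)
```

The actual models were fitted in SPSS; this sketch only illustrates how the outcome variable and the correlation described in the methods section relate to the raw questionnaire scores.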
In general, many medical students reported a low exposure to chronobiology and sleep medicine during their undergraduate studies. More specifically, studies in chronobiology and sleep medicine were more prevalent during the basic cycle of the medical program, characterized by an introductory approach to human physiology topics. However, the availability of these subjects decreased in subsequent years. The investigative questionnaire revealed that studies in sleep medicine and chronobiology were offered to 210 students, constituting 87.5% of the total respondents, during the basic cycle, which corresponds to the initial two years of the medical undergraduate program and is the propaedeutic period for subsequent disciplines. During this period, sleep physiology and basic mechanisms were mentioned to a considerable extent, but the neuroanatomic substrates of sleep and wakefulness, as well as determinants of daytime sleepiness were covered only to a limited extent, indicating a deficiency in the teaching of sleep fundamentals. In the first two introductory years, important topics include sleep physiology, chronobiology of sleep and wakefulness, neuroautonomic substrates of sleep, dream studies, respiratory sleep parameters, sleep-related history and physical examination, followed by the pathophysiology of sleep disorders . Consequently, during the clinical cycle, which extends over the two years following the basic cycle, students are engaged in clinical diagnostic and therapeutic reasoning through case studies. At this stage, only studies related to sleep investigation and diagnosis, as well as sleep disorders in children were mentioned. The data reveal a significant gap in the teaching of these subjects, which could lead to underdiagnosed patients and potentially erroneous treatment for non-sleep-related disorders in the future . In this context, the years of clinical training offer opportunities to integrate sleep-related topics, given that patients with sleep disorders present a range of symptoms, and these symptoms may reflect underlying primary disorders; for example, insomnia may manifest as a symptom of depression . Internship is a mandatory cycle during which students undergo hospital and outpatient rotations in major medical areas such as internal medicine, surgery, gynecology and obstetrics, pediatrics, public health, and mental health. During this period, there is a greater emphasis on studies related to sleep medicine, particularly in the field of psychiatry. This might imply a bias toward a psychiatric diagnosis; however, it is important to acknowledge that sleep disorders can arise from metabolic, cardiovascular, neurological, immunological, or social factors . During these rotations, it is essential to provide students with guidance on diagnostic methods for sleep disorders, and consideration could be given to developing an elective module in sleep medicine to offer an intensive experience in the field . Considering the population exposed to the topic during undergraduate studies, there was limited time dedicated to the study of sleep-related subjects . In line with this, a study involving 12 countries (Australia, India, Indonesia, Japan, Malaysia, New Zealand, Singapore, South Korea, Thailand, United States, Canada, and Vietnam) reported that the average amount of time spent on sleep education is slightly under 2.5 h, with 27% reporting no sleep education in their medical schools . 
Similarly, less than 2 h are allocated to teaching sleep and sleep disorders at 126 medical schools in the USA . Although sleep medicine teaching for undergraduate students in the UK has increased to an average of 1.5 h, a six-fold improvement compared to 1988, it is still considered insufficient . In contrast, a recent study involving final-year medical students from seven Lebanese medical schools reported that higher scores on the ASKME were associated with sleep medicine education in the medical school curriculum . This is supported by the Institute of Medicine (IOM) report, recommending that exposure to sleep medicine should begin before entering residency and be integrated early into medical school curricula . In this way, understanding the rhythms related to the functions being assessed, especially the sleep-wake cycle, will assist the clinicians in their practice, enabling them to be more attentive to complaints involving their patients' rhythms. This expanded view provides the benefit of enlightening and guiding the patient about their life and work routines, especially regarding sleep disorders . In addition to the benefits to the doctor-patient relationship, education in sleep medicine at the undergraduate level contributes to medical students' understanding of the health-disease process, particularly concerning chronobiological aspects and sleep health . Various studies have reported an association between poor sleep quality and both low academic performance and mental health in medical students . Additionally, a well-documented correlation exists between sleep disorders and suicidal behavior in both young people and adults . The topic is likewise not a priority within other academic disciplines. A study conducted by our group involving 1,384 psychologists in Brazil, revealed a lack of familiarity with the term “chronobiology” and other biological rhythms beyond the sleep-wake cycle . Similarly, physiotherapists receive limited education on sleep , as do dentists, where education in dental sleep medicine in academic institutions in the USA and Canada faces numerous obstacles . In Brazil, concerning the general population, educational exposure to sleep-related subjects is scarce, especially considering the high prevalence of poor sleep quality associated with a lack of circadian and sleep hygiene practices . The examination of data in this study provides insights into the distribution of content related to chronobiology and sleep medicine across undergraduate programs in Brazilian medical schools. The majority of respondents acquired knowledge in this field when exposed to the subject in three distinct cycles of their undergraduate medical education. Among this cohort, a significant variation in the number of correct answers was observed compared to students who encountered the subject at only one or two cycles or had no exposure at all. Upon conducting a more detailed analysis of the correct answers, it is crucial to prioritize questions that displayed considerable variation, both positively and negatively. A thorough examination of responses indicated that question 3 showed a significant error rate (82.08%). This particular question is about the recognition that pre-adolescents and adolescents require more sleep and often have a vespertine chronotype, potentially leading to sleep deprivation due to early morning school schedules . The same question, when answered by medical students from Saudi Arabia and Lebanon, yielded error rates of approximately 50 and 65%, respectively . 
This disparity suggests that Brazilian students may have more limited knowledge of specific topics. Although the continuity in exposure allows for the gradual consolidation of knowledge over time, enabling students to progressively deepen their understanding and skills, this finding contrasted with a significant number of schools that report no structured teaching time in this field, and only a minimal percentage of medical students receive training in sleep laboratory procedures or participate in the clinical evaluation of sleep-disordered patients . The data is in line with the notion that multiple exposures, as opposed to a single exposure, are more effective in facilitating a comprehensive understanding and application of knowledge, as demonstrated by the study of Marinopoulos et al. on the effectiveness of continuing medical education. The aforementioned observation also aligns with the conclusions from a comprehensive survey of USA medical schools, where significant impediments were reported. The identified obstacles encompassed the inadequacy of qualified faculty, limited curriculum time, and a demand for additional clinical and educational resources in the realm of sleep and sleep disorders education , as also declared by the students in the present study. This study had several limitations. The low response rate may limit the generalizability of the findings, influenced by individual recruitment and social network invitations. To address the low response rate, we reduced the original ASKME questionnaire from thirty to ten general questions, which may have introduced bias. Additionally, the lack of a validated Brazilian Portuguese version of the ASKME survey may impact its accuracy in reflecting the cultural and educational context of Brazilian medical students. The study also lacks curriculum evaluations and data on time allocated to the field, particularly given diverse approaches such as problem-based learning (PBL), which is organized into thematic modules across institutions. Furthermore, including only students from the final two years of medical school may have affected the analysis, as some students may still be in the early stages of their final years without complete exposure to chronobiology and sleep medicine, potentially influencing their knowledge level and response accuracy. Lastly, the absence of questions about self-directed study limits insights into the impact of independent learning on knowledge perception. These limitations suggest caution in interpreting the results and underscore the need for more comprehensive future research. In conclusion, the study can serve as a cornerstone for future research aiming to expand the study of sleep medicine and chronobiology in medical schools by providing an overview of the Brazilian situation regarding the teaching of these topics. The strengths of the study include the use of a validated instrument (ASKME) for assessing sleep medicine knowledge and the inclusion of participants from both public and private medical education systems in the country, covering all 27 Brazilian states. The study provides insight into a previously unknown scenario regarding the teaching of sleep medicine in Brazilian medical schools. |
Assessing the factors affecting the accessibility of primary dental care for people with haemophilia | bec2b67a-e0db-4b5f-80d2-209ce72295b6 | 11780182 | Dentistry[mh] | INTRODUCTION Oral health is an important aspect of quality of life (QoL) in general and in particular in patients with bleeding disorders (BD). , Dental care of patients with haemophilia (PWH) represents a largely unmet need of their comprehensive management program. , , , , , In addition to previously existing difficulties, patients had to face challenges due to the outbreak of coronavirus type‐2 (CoV‐2) disease (COVID‐19) pandemic since 2019. , , The obstacles to obtaining adequate dental care can result in poor oral hygiene, thereby increasing the necessity for more invasive dental treatments. Prevention and early detection of dental diseases is of paramount importance among PWH, as most common non‐surgical procedures can be performed in a general dental practice (GDP) provided that a haematologist is involved, and guidelines are followed. , , Only a few groups from the Americas and the United Kingdom investigated the access to dental care for PWH. , , , , , No survey on dental care experience of PWH has yet been published from continental Europe. The aim of this study was to extend the research on the accessibility of dental care for PWH, with a particular focus on primary dental care. We also aimed at collecting data on patients’ perceptions of how COVID‐19 pandemic has affected access to dental treatments. MATERIALS AND METHODS 2.1 Study design This multicentre cross‐sectional study was performed between July and December 2022 in four major Hungarian haemophilia treatment centres (HTCs) (National Hemophilia Center and Hemostasis Department, Medical Center of the Hungarian Defense Forces—Budapest; Heim Pal Children's Hospital—Budapest; Clinical Center of the University of Debrecen—Debrecen; Mohács Hospital—Mohács). Inclusion and exclusion criteria are shown in Table . We used a self‐administered anonymous questionnaire developed for this study based on literature review and expert opinion of the senior dentist and haematologist (IM and CK). , , , , , , Children under 16 years of age completed the questionnaire with the help of their guardians. A pilot test of the questionnaire was carried out with 10 PWH (eight adults, two children) from the Debrecen centre. Ethical approval was obtained (Regional and Institutional Ethics Committee, Clinical Center, University of Debrecen; No. DE RKEB/IKEB: 6087‐2022). Study was conducted in compliance with the Helsinki Declaration. 2.2 Sample population and data collection Study participants were recruited from the participating HTCs. Participation was voluntary. Written informed consent was obtained from study patients and/or by legal guardians. A priori sample size calculation, incorporating a 10% allowance for dropout and nonresponse, determined the need for 58 participants (adjusted to 64) to detect a difference with means of 64.01 and 82.43, standard deviations (SD) of 26.26 and 22.85, respectively, with 80% power and a two‐sided alpha of .05. During data collection period, 80 patients were invited, and 68 patients enrolled (response rate: 85%). The questionnaires were completed in HTCs ( n = 63; 93%) or online via a Microsoft Forms link ( n = 5; 7%). 2.3 Variable specification The questionnaire contained 30 questions (Table ) and was divided into four sections (Table ). 
In regard to the level of dental care, the statistical analysis was based on the Hungarian healthcare system, and three age groups were delineated (Table ). Haemophilia severity was defined conventionally by factor VIII and IX levels. Patients with severe or moderate haemophilia and those with mild haemophilia were assessed separately. Inhibitors were categorized conventionally by Bethesda titer (BU/mL). 2.4 Variable selection and statistical analysis Categorical variables were compared using Pearson's Chi-squared test. Multiple logistic regression models were built to investigate the factors influencing the frequency of visits to the dentist, refusal of dental treatment and patients' views on the dental care options for PWH. To enhance the robustness of our multiple logistic regression models and ensure the selection of the most predictive variables, we employed the Least Absolute Shrinkage and Selection Operator (LASSO) regression technique. Findings from the logistic regression analyses were represented as odds ratios (ORs) and 95% confidence intervals (CIs). Statistical evaluations were executed using STATA IC Version 17.0 software. A p-value < .05 was considered significant.
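To make the analysis plan concrete, the following is a minimal sketch of the sample-size calculation and the LASSO-then-logistic-regression workflow described above. It is an illustration only, written in Python rather than the STATA 17 used in the study, and the data file and column names (for example `visited_last_year` and `permanent_dentist`) are hypothetical.

```python
# Illustrative sketch only: the study used STATA 17; the file and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.power import TTestIndPower
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

# --- A priori sample size (two-sample comparison, 80% power, two-sided alpha = .05) ---
m1, m2, sd1, sd2 = 64.01, 82.43, 26.26, 22.85
pooled_sd = np.sqrt((sd1**2 + sd2**2) / 2)
effect_size = abs(m2 - m1) / pooled_sd                   # Cohen's d of roughly 0.75
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
n_total = int(np.ceil(2 * n_per_group))                  # about 58 participants
n_adjusted = int(np.ceil(n_total * 1.10))                # about 64 with a 10% dropout allowance

# --- Univariate comparison of categorical variables (Pearson's chi-squared test) ---
df = pd.read_csv("haemophilia_survey.csv")               # hypothetical data file
table = pd.crosstab(df["severity_group"], df["visited_last_year"])
chi2, p, dof, expected = chi2_contingency(table)

# --- LASSO-based variable selection, then logistic regression for ORs and 95% CIs ---
X = df[["permanent_dentist", "oral_hygiene_consult", "age", "severe_or_moderate"]]
y = df["visited_last_year"]
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = X.columns[np.abs(lasso.coef_[0]) > 1e-6]      # predictors retained by the L1 penalty

logit = sm.Logit(y, sm.add_constant(df[selected])).fit()
odds_ratios = np.exp(logit.params)                       # ORs
conf_intervals = np.exp(logit.conf_int())                # 95% CIs
print(odds_ratios, conf_intervals)
```

In this sketch, `C` controls the strength of the L1 penalty; in practice it would be tuned (for example by cross-validation) before refitting the unpenalized model whose exponentiated coefficients give the reported ORs and 95% CIs.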
RESULTS 3.1 Patient demographic and disease characteristics Demographic data and disease characteristics are presented in Table . 3.2 Descriptive and multivariate analyses of access and quality of dental care 3.2.1 Frequency of visits to dental practice Severe and moderate versus mild PWH were compared to investigate the effect of haemophilia severity on frequency of dental visits (Table ). Results indicated that a significantly higher proportion of patients with mild haemophilia visited a dentist in the preceding year than patients with severe or moderate haemophilia ( p = .026). Furthermore, age had a significant effect on the frequency of dental visits ( p = .033) (Table ). The data revealed a markedly higher frequency of dental visits among individuals within the 0–18 age group (51.35%). Results indicated that comorbidities and negative experiences of refusal by dentists did not significantly impact the frequency of visits (Table ). A multiple logistic regression model based on LASSO selection was used to examine the effect of a permanent dentist, oral hygiene consultation attendance, age and type of haemophilia on the frequency of visiting a dental office. Participants having a permanent dentist ( n = 37; 54%) had higher odds of visiting a dental practice (OR: 9.95, 95% CI: 2.86–34.62). Moreover, patients who have ever attended an oral hygiene consultation ( n = 28; 41%) had higher odds of visiting a dental office (OR: 3.84, 95% CI: 1.09–13.58), mostly “in the workplace/school” ( n = 15; 54%) or “in a private dental practice” ( n = 11; 39%). The analysis revealed that the age and type of haemophilia did not have a significant impact on the likelihood of visiting the dentist (OR: 1.01, 95% CI: .97–1.05 and OR: .20, 95% CI: .02–1.85). Most participants not attending a dentist in the previous year denied any dental problems ( n = 24; 77%) (Table ). The second most frequent response was delayed treatments due to the COVID-19 epidemic ( n = 5; 16%). Impact of COVID-19 pandemic on access to dental care is shown in Table . 3.2.2 Refusal by dental care-providers Fourteen (21%) respondents have already been refused at least once because of their BD (Table ). Twelve (86%) such events happened in primary dental care financed by the National Health Insurance Fund of Hungary. Pearson's Chi-squared test revealed a statistically significant association between age group and the refusal of dental care ( p = .020) (Table ). Furthermore, refusal rate was found to be significantly higher in the presence of comorbidities ( p = .006). Neither the severity of haemophilia nor the availability of a regular dentist significantly affected the refusal rate (Table ).
Sixty‐nine percent of respondents were completely ( n = 20) or mostly ( n = 27) satisfied with the dental care options for PWH. There was a significant difference ( p < .001) in the refusal by dental care‐providers between satisfied ( n = 47) and dissatisfied patients ( n = 7) (Table ). All dissatisfied patients were refused. A multiple logistic regression model based on LASSO selection was used to examine the effect of infectious disease, bleeding episodes following dental procedures and type of haemophilia on refusal of dental care. Those patients who have or ever had an infectious disease ( n = 29; 43%) had higher odds for refusal (OR: 4.48, 95% CI: 1.14–17.69). Patients who ever experienced bleeding or swelling after dental procedures ( n = 24; 35%) were also more likely to experience refusal (OR: 4.23; 95% CI: 1.10–16.27). In the course of the questionnaire survey, three (4%) respondents concealed their infectious disease for fear of rejection, however, none of the respondents concealed their haemophilia. A multiple logistic regression model based on LASSO selection was used to examine the effect of oral hygiene consultation attendance, type of haemophilia, infectious disease and bleeding episodes following dental treatment on patients’ assessment of their dental care options. Participants who had ever attended an oral hygiene consultation ( n = 28; 41%) had higher odds of being satisfied with the dental care options for PWH (OR: 6.28, 95% CI: .71–55.88). Patients with severe and moderate haemophilia, infectious disease, or experienced bleeding signs or swelling after a dental procedure were characterized by lower odds of satisfaction (OR: .60, 95% CI: .04–8.94, OR: .07, 95% CI: .01–.89 and OR: .13, 95% CI: .02–1.03, respectively). 3.2.3 Dental complaints and treatments Table presents a summary of patients’ responses to questions regarding dental complaints and treatments. The majority of respondents reported no dental complaints ( n = 58; 85%). Fifty‐four (79%) participants had some dental procedure, with the most common interventions being restorative treatments. Twenty‐four (35%) respondents experienced prolonged oozing, bleeding, or swelling after a dental treatment. Most bleeding occurred after surgical procedures ( n = 17). In case of dental pain, the majority ( n = 41; 60%) of participants would consult a dentist first. If patients had a choice of any level of care for dental procedures, 15% of the participants would choose a treatment to be performed by GDP, while most (65%) of them would go to a private practice. 3.2.4 Knowledge on dental care options and patients’ evaluation of dentists The majority of participants ( n = 45; 66%) considered themselves to be mostly informed about the dental care options available for PWH (Table ). Patients who were fully informed ( n = 9; 13%), expressed satisfaction. Moreover, the majority of participants in both the mostly informed and uninformed ( n = 14; 21%) groups reported no significant obstacles to accessing dental care. A total of 14 parents of 25 children (56%) with haemophilia were unaware that the first dental check‐up is recommended before the age of 1 year. The survey respondents reported that dentists were fully ( n = 7) or mostly prepared ( n = 13) for the management of PWH (together, 29%). None of the participants considered dentists unprepared, although 34 (50%) of them said dentists know the disease, but their knowledge was very incomplete. 
DISCUSSION This survey is the first study reporting on access to dental care for PWH in continental Europe and one of the very few studies reporting on the impact of COVID-19 on dental treatments for people with BD. The study paid particular attention to primary care. To be treated in a specialized care centre, patients face many obstacles, such as travel distance or long waiting times for treatments. In addition to haemophilia, these barriers place an additional burden on patients, which can put dental care at the bottom of the priority list. Fortunately, most of the participants (54%) of this study had visited a dentist at least once in the previous year, similar to the findings of other investigators. , According to our data, severity of haemophilia exerted a significant impact on frequency of dental visits among patients. A possible explanation is that patients at a higher risk of bleeding encounter greater difficulties in finding dentists willing to provide them with care, as suggested by the results of Frusca do Monte et al. Nevertheless, according to our results, dentists who actually treated the patients were not influenced by the severity of haemophilia. Similar to Fiske et al., we demonstrated that children (0–18 years of age) were more likely to attend dental appointments. This observation highlights the need for oral hygiene education at an early age and making children familiar with dental visits, which can alleviate anxiety and promote oral health. , The survey found a high rate of dental treatment refusal among participants. Twenty-one percent of patients were refused at least once by a dentist because of haemophilia, and 86% of the refused participants had been rejected by a GDP. However, primary care would give patients faster access to treatments that do not require an advanced level of care.
The refusal rate was similar to that found by three groups from the US and UK, varying between 18% and 29%. , , A study from Saudi Arabia reported a much higher refusal rate (67%). Prevalence of refusal was higher among older individuals, similar to the findings of Frusca do Monte et al. As adults undergo a greater number of dental examinations over their lifespan than children and adolescents, they are more likely to experience refusal. Moreover, the complexity of dental treatment needs and medical comorbidities are greater among older individuals, which presents a challenge in providing appropriate care. Our findings corroborate other studies that have demonstrated a negative impact of current or previous infectious diseases on access to dental treatments for PWH. , Notably, 4% of the respondents concealed their infectious disease for fear of rejection. Comorbidities, as well as previously observed bleeding following dental procedures, also increased the hesitancy of dentists to treat PWH. The present data confirmed observations of a US survey suggesting that finding care providers willing to treat patients with BD is a major hurdle. Participants who had a permanent dentist had higher odds of visiting a dental practice than those who did not. Patients may find it difficult to access dental services because many dentists have limited confidence in treating PWH. Furthermore, there are notable discrepancies in the equipment and facilities utilized in dental practices. Twenty-four (77%) of all the participants who did not attend a dentist in the previous year denied any dental problems. Other difficulties, identified as barriers, have also been reported previously (Table ). , , , , The outbreak of COVID-19 and its impact on the health sector have created new difficulties. In our survey, despite the challenges encountered, 84% of patients reporting obstacles still visited their dentist at least once between 2021 and 2022. However, 16% of respondents went for only the most necessary treatments, even though avoiding dental problems should have been a priority for this group of patients. A large proportion of respondents (32%) were unable to assess the impact of COVID-19 on dental care, probably because 86% of them had not visited a dentist in the preceding year. Fifty percent of participants said dentists were aware of haemophilia, but their knowledge was incomplete. Similarly, Frusca do Monte et al. observed that 53% of patients with BD lacked confidence in GDPs' ability to provide dental care. In contrast, Fiske et al. and Kalsi et al. observed lower rates. , It is of the utmost importance to emphasize the significance of dental care for PWH and to provide assistance to dentists in this regard. It is recommended that the undergraduate curriculum should place emphasis on the care of patients with BD, and that this topic should continue to be addressed in postgraduate training. Furthermore, a comprehensive guide should be made readily available to healthcare professionals in their native language. The work of Hungarian dentists is supported by Hungarian-language resources covering all fields of dentistry. , Furthermore, consultation channels need to be established with HTCs where dentists can easily access information. According to our data, in the case of dental pain, 31% of the participants would consult a haematologist for help, which highlights the importance of close cooperation between the two professions.
Supporting dental care and prevention for PWH can be of paramount importance, as our data show that those who have ever attended an oral hygiene consultation are more likely to visit a dental office. Forty-one percent of participants indicated that they had received such advice, a low proportion compared to the findings from a previous study (76% and 85% of adults and children, respectively). In accordance with the recommendation of the World Federation of Haemophilia (WFH), preventative care should commence at an early age, ideally with the eruption of the first tooth. Our survey showed that 56% of parents of children with haemophilia did not know about this guideline. Nevertheless, the majority of respondents felt that they were mostly informed about dental care options for PWH and were completely or mostly satisfied with these options. The importance of prevention is also shown by the fact that participants who had ever attended an oral hygiene consultation had higher odds of being satisfied with their dental care. Seventy-nine percent of the participants had undergone a dental procedure, and the majority of these were restorative treatments. Most dental procedures (65%) were not associated with bleeding complications, which were more commonly reported after surgical procedures. Dentists can also play a key role in the early detection of haemophilia. According to the study by Sonis and Musselman, 13.6% of the 132 participants with haemophilia were diagnosed because of persistent oral bleeding. We observed a rate of 7.3%. The study emphasized the significance of patients visiting the dentist on a regular basis and receiving appropriate oral hygiene advice during dental visits. The most direct access to dental treatment is through primary dental care. However, our survey indicated that this was the least preferred option for patients, presumably due to the high refusal rate. Nevertheless, we believe that dentists working in primary care have an important role to play and can help alleviate the dental care difficulties of PWH. A strength of the survey is that data were collected in the largest HTCs in the country, and 93% of the questionnaires were completed in the dental office. The small number of patients is a limitation of the study. An international study of larger sample size comparing findings across countries would give a more accurate picture of factors influencing patients' access to dental care services. Secondly, a limitation of the questionnaire was the absence of open-text fields, which compelled participants to select from predefined options. This may have resulted in a lack of comprehensive representation of their full range of thoughts and experiences. Furthermore, the questionnaire was based on self-report, which can lead to social desirability bias and recall errors. In addition, although patients invited to participate were randomly selected from the total haemophilia population of the HTCs, voluntary participation may still have selected for a particularly motivated, health-conscious cohort. CONCLUSION This study identified obstacles in the oral care of PWH and evaluated the COVID-19 pandemic as a new barrier. In conclusion, supporting dental care for PWH is of paramount importance, with particular emphasis on primary care and on having a permanent dentist, so that fast and simple access to dental care is available. However, our survey shows that most refusals experienced by PWH occur in general dental practices.
In addition, the COVID-19 pandemic has exacerbated the difficulties of PWH in accessing dental care. The authors stated that they had no interests which might be perceived as posing a conflict or bias. This study was approved by the Regional and Institutional Ethics Committee, Clinical Center, University of Debrecen, Hungary (registration No DE RKEB/IKEB: 6087-2022). The study has been conducted in full compliance with the Declaration of Helsinki. Written informed consent was signed by participating patients and/or by their legal guardians. Supporting information
Synoptic Reporting in Clinical Placental Pathology: A Preliminary Investigation Into Report Findings and Interobserver Agreement | f4a74cd3-6aa6-4965-8ee8-ceb315a7d8ee | 10559645 | Pathology[mh] | The placenta is the critical organ of pregnancy, regulating fetal growth and development and modulating maternal adaptations during pregnancy to support the developing fetus. Due to these fundamental roles, healthy placental developmental and function are vital for optimal outcomes of both mother and fetus/infant. Adverse pregnancy outcomes such as preterm birth, preeclampsia, fetal growth restriction, and stillbirth are leading causes of maternal and fetal/neonatal mortality and morbidity worldwide. - Moreover, these complications are linked to a number of insults and/or exposures that disrupt placental structure and function, such as infection, underlying maternal morbidities (i.e., hyperglycemia), abnormal vascular development, and immunomodulatory aberrations. - Placental health can be assessed following delivery by gross and histopathological examination of placenta, providing insight into potential etiologies of these adverse pregnancy outcomes, immediate and long-term impacts to maternal and neonatal health and potential recurrence risks. , In this regard, placental pathology has a critical role in the continuum of care for mothers and their infants. As in other pathology specialties, issues in standardization, reporting practices and clinical translation are recognized limitations in the field of placental pathology. - Recent efforts to improve the quality and robustness of placental pathology in practice include the development of international consensus guidelines, such as the Amsterdam criteria, for lesion definitions and severity criteria, recommendations for standardized gross examination and uniform approaches for placental submission to Pathology. - Despite these efforts, lack of standardized reporting practices yielding potentially incomplete and biased placental evaluations remains a current problem. To improve and advance this important clinical modality, a synoptic reporting approach in which a line-by-line evaluation of placental lesions is employed may increase the completeness and limit bias in the evaluation of histopathology lesions, as demonstrated in the field of oncologic pathology. Synoptic reporting has become widespread in the field of oncopathology, increasing the quality and completeness of pathology reporting and allowing for the creation of uniform, multi-center databases that can be leveraged for large-scale research endeavors. - Recently, our group developed a novel synoptic report for placental pathology based on current literature and practice guidelines, as an extension of Amsterdam consensus criteria. , Our long-term goal in the development of this synoptic report is to guide the implementation of the Amsterdam consensus criteria into clinical practice and take initial steps in creating robust databases in placental pathology for large-scale analysis to explore clinical significance of a wide range of placental lesions. As first steps to the implementation of this synoptic tool in clinical practice, we conducted an internal audit of this synoptic report. 
Our objectives for the current study were 2-fold, we sought to: (1) evaluate and compare the use of the synoptic report to historical narrative reporting of placenta cases, and (2) assess interobserver agreement regarding lesion presence and severity between senior perinatal pathologists and resident pathologists. These 2 objectives were undertaken to both compare/contrast the type of information captured when using traditional narrative reporting vs proposed synoptic reporting, and to determine the similarity in data captured using this synoptic reporting tool when applied by users with different experiential and training backgrounds. Collectively, both pieces of information are needed for consideration prior to moving forward with the implementation of such a tool in either a clinical or research setting. This was a retrospective cohort study of archived placenta pathology examination reports and accompanying histopathology tissue sections of placentas submitted to the Department of Pathology (Children’s Hospital of Eastern Ontario, Ontario, Canada) between 2013 and 2014. This study was approved by the Children’s Hospital of Eastern Ontario (CHEO) Research Ethics Board (REB#15/19X). Case Selection and Retrospective Review of Historical Reports Placentas sent to the Department of Pathology between October 1, 2013 and December 31, 2014 were randomly selected for inclusion in the study using a random number generator of uniquely assigned patient study numbers. During this time period, approximately 2200 placentas were received, and 100 placental cases from singleton pregnancy with a liveborn infant were selected for inclusion based on sample size calculation for clinical audits, accepting a 10% inaccuracy due to sampling. Cases were excluded if the gestational age at delivery was not provided with the pathology requisition. Historical pathology reports signed out by pediatric pathologists at CHEO were reviewed for demographic data (maternal history, infant sex and birthweight, pregnancy diagnosis at delivery) as well as gross anatomical findings and information was entered in a secure Redcap study database. For the retrospective review of historical placental pathology reports, each report was reviewed for histopathological findings noted by the original reporting pathologist. For each lesion indicated in the historical report, the severity description was recorded in a data collection form and included all descriptors (mild/moderate/severe; absent, etc). Placental Assessments With Synoptic Report Following review of the historical narrative report, accompanying H&E-stained placenta tissue slides were retrieved from the Eastern Ontario Regional Laboratory Association (EORLA) slide repository at CHEO. Representative tissue sections had been collected from the umbilical cord, fetal membranes and full-thickness tissue sections from each quadrant of the placenta according to EORLA standard operating procedures. Additional tissue blocks were collected when overt pathology was noted visually. Thus, each included case had a minimum of 6 tissue sections which were all reviewed in de novo fashion by the reporting pathologist and evaluated using the synoptic report. 
The synoptic report provides diagnostic and severity criteria for 32 distinct placental lesions categorized into 9 etiological categories (maternal vascular malperfusion, maternal decidual arteriopathy, implantation site abnormalities, ascending intrauterine infection, placenta villous maldevelopment, fetal vascular malperfusion, utero-placental separation, maternal-fetal interface disturbance, and chronic inflammation), largely based on Amsterdam consensus statement criteria, with the addition of other histopathological lesions of interest. For each lesion, a definition based on current literature and consensus guidelines , is included in the synoptic report, and the user is required to enter a semi-quantitative score based on the absence/presence and severity of each lesion (absent [score = 0]/present [score = 1], severity [score = 1–3]). A narrative text field at the end of the report allows for inclusion of additional findings. The histology slides of each case were independently examined by 2 experienced perinatal pathologists (DG and DED) using the synoptic report (see Supplemental Appendix A ). The pathologists were blinded to all clinical information (except for gestational age at delivery and placenta weight) and the historical pathology report. Gross placental findings were provided to the pathologists when needed in diagnosing microscopic lesions such as retroplacental adherent hematomas. Two anatomical pathology residents (AL, PGY3 at study conduction and JS, PGY5 at study conduction) reviewed the placental cases in the same manner as described above. The placentas selected for inclusion within this study (i.e., submitted to Pathology between 2013 and 2014) had initial historical reports created by the reporting pathologist prior to the publication and widespread implementation of Amsterdam consensus statement criteria. Thus, de novo examination of the placental slides with the proposed synoptic report acted as a method of objectively putting into practice the consensus statement criteria while additionally assessing other placental lesions of interest. Statistical Analysis Data were analyzed using Microsoft Excel 2010 for descriptive data and GraphPad QuickCalcs ( https://www.graphpad.com/quickcalcs/kappa1/ ) to quantify agreement with kappas which uses equations 18.16 to 18.20 from Fleiss, Statistical Methods for Rates & Proportions, 3rd edition. Descriptive data were expressed as means and standard deviations for normally distributed data or medians with interquartile ranges for non-normally distributed data. To compare the reported findings between the synoptic report and the historical narrative reports, the proportion of lesions not mentioned in the historical narrative report but indicated as a positive finding on the synoptic report was calculated, and vice versa. A post-hoc analysis was also completed for senior pathologists’ who participated in the study, to compare their diagnoses on the historic narrative report (DED, 26 cases and DG, 49 cases) to those that were found with the synoptic report. This data is presented in Supplemental Appendices 2 and 3 . Interobserver agreement between senior pathologists and between resident pathologists for each lesion was assessed using weighted kappa scores. Weighted kappa scores assume that categories are ordered and accounts for how far apart raters are, using linear weights. To assess agreement between the residents and the senior pathologists, non-weighted “binary” kappa scores were calculated. 
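As an illustration of the two agreement statistics described above (the study itself used GraphPad QuickCalcs based on the Fleiss equations), the fragment below computes a linearly weighted kappa on the ordinal 0–3 scores and a non-weighted kappa on dichotomized presence/absence calls; the score vectors are hypothetical.

```python
# Illustration only: hypothetical scores for one lesion from two raters across ten placentas.
from sklearn.metrics import cohen_kappa_score

# Semi-quantitative synoptic scores: 0 = absent, 1-3 = increasing severity.
rater_a = [0, 0, 2, 1, 3, 0, 1, 0, 2, 0]
rater_b = [0, 1, 2, 1, 2, 0, 1, 0, 3, 0]

# Linearly weighted kappa: near-misses on the ordinal scale are penalized less than
# large disagreements, reflecting the "linear weights" approach described above.
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

# Non-weighted ("binary") kappa on dichotomized presence/absence calls.
present_a = [int(score > 0) for score in rater_a]
present_b = [int(score > 0) for score in rater_b]
binary_kappa = cohen_kappa_score(present_a, present_b)

print(f"weighted kappa = {weighted_kappa:.2f}, binary kappa = {binary_kappa:.2f}")
```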
The scoring of placental slides by each lesion, completed by the resident pathologists, was reviewed and compiled. A masterlist was created for the resident pathologists: for each placental case, if one or both of the residents indicated the lesion was present, the lesion was noted to be present (i.e., = 1). If both residents indicated the lesion was absent, it was given a score of 0 in the masterlist. This same process was applied to the scoring of placental lesions by the senior pathologists. Kappa scores were calculated using the masterlists to assess the level of agreement between resident and senior pathologists regarding the presence/absence of each distinct placental lesion included within the synoptic report. A similar non-weighted, post-hoc analysis was completed to compare each resident pathologist's interobserver agreement with the senior pathologists. Kappa scores were interpreted as follows: <0.40 indicated poor agreement between reviewers, 0.41–0.75 indicated fair to good agreement, and values >0.75 were considered excellent agreement. Mean (SD) kappa scores were calculated for each category of placental lesions, stratified by the analyses stated above (senior pathologists, resident pathologists, and senior vs resident pathologists).
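A minimal sketch of the masterlist logic and interpretation thresholds described above is shown below; the lesion names and scores are hypothetical, and the code is only meant to make the aggregation explicit.

```python
# Illustration of the masterlist aggregation; lesion names and scores are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def masterlist(rater_a, rater_b):
    """Per case: 1 if either rater scored the lesion as present (>0), 0 only if both scored it absent."""
    return [int(a > 0 or b > 0) for a, b in zip(rater_a, rater_b)]

def interpret(kappa):
    """Interpretation thresholds used in the text."""
    if kappa > 0.75:
        return "excellent"
    if kappa >= 0.41:
        return "fair to good"
    return "poor"

# Semi-quantitative scores (0 = absent, 1-3 = severity) per rater, per lesion, across cases.
lesions = {
    "acute chorioamnionitis": {"residents": ([0, 2, 0, 1, 0], [0, 2, 0, 0, 1]),
                               "seniors":   ([0, 2, 0, 1, 1], [0, 3, 0, 1, 1])},
    "villous infarct":        {"residents": ([1, 0, 0, 0, 2], [0, 0, 0, 0, 2]),
                               "seniors":   ([1, 0, 1, 0, 2], [1, 0, 0, 0, 1])},
}

lesion_kappas = []
for name, scores in lesions.items():
    resident_calls = masterlist(*scores["residents"])
    senior_calls = masterlist(*scores["seniors"])
    kappa = cohen_kappa_score(resident_calls, senior_calls)  # binary resident-vs-senior kappa
    lesion_kappas.append(kappa)
    print(f"{name}: kappa = {kappa:.2f} ({interpret(kappa)})")

# Category-level summary as mean (SD) of the per-lesion kappas.
kappas = np.array(lesion_kappas)
print(f"category mean kappa = {kappas.mean():.2f} (SD = {kappas.std(ddof=1):.2f})")
```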
Cohort Characteristics Of the initial 100 placental cases that were randomly selected for inclusion in the study, 42 (42%) were missing gestational age at delivery. These cases were excluded and review of an additional 94 cases was required to achieve the complete cohort of 100 cases which met eligibility criteria. The demographics of the cohort are shown in Table . The most common indication for submission of the placenta for pathology examination was preterm birth (27%), followed by maternal history (18%) and fetal anomalies (17%). The majority of births were by vaginal delivery (62%). Median gestational and maternal ages at delivery were 37 weeks (Q1, Q3 [33, 39]) and 31 years (Q1, Q3 [27, 35]), respectively, and mean birthweight percentile was 39.8 (Q1, Q3 [14.0, 58.0]). Narrative vs Synoptic Reporting and Detection of Placental Lesions Table demonstrates the detection of placental lesions when using the synoptic report vs detection included in the historical narrative report. When comparing the narrative reports to the synoptic reports across all placentas and lesion categories, the synoptic reporting tool detected 169 instances of placental lesions that were missed in the narrative report, at a rate of 51.4%. Occasionally, cases were identified in the historical narrative report but not identified in the synoptic report, which occurred for a total of 32 instances, at a rate of 24.7%. The results of our post-hoc analysis, comparing the diagnoses from the study pathologists' original historical narrative reports to those from their de novo synoptic reports, are presented in Supplemental Appendices 2 and 3. Interestingly, as shown in Supplemental Appendix 3, cases originally signed out by DED demonstrated a greater rate of instances of placental lesions recorded in the narrative report as compared to the synoptic report (average 45.6% across all lesions), with this difference most notable within the category of maternal vascular malperfusion lesions.
Interobserver Agreement Between Pathologists Using the Synoptic Reporting Tool We examined interobserver agreement using the synoptic reporting tool comparing senior pathologists to each other, resident pathologists to each other, and comparing the residents to the senior pathologists to assess the consistency of information collected using this tool when applied by users with different experiential/training backgrounds—a metric required for consideration of future implementation of this tool in either clinical or research settings. When assessing interobserver agreement between senior pathologists using the synoptic reporting tool , 4 out of the total 32 lesions were not identified in any of the placentas by the senior pathologists and thus no kappa was calculated for the following 4 lesions: villous stromal-vascular karyorrhexis, maternal floor infarct pattern, infectious villitis, and chronic intervillositis. Of the remaining 28 placental lesions, 18 (64.3%) demonstrated fair to excellent agreement (k ≥ 0.40). When the synoptic tool was used by resident pathologists, a considerably lower interobserver agreement was obtained, with reporting on only 8 of the total 31 placental lesions identified in the cohort (25.8%) demonstrating fair to excellent interobserver agreement. It should be noted that 1 lesion was not called by either resident pathologist using version 1.7 of the synoptic report, thus 31 kappas were calculated out of the 32 lesions in the report. Interestingly, a higher degree of interobserver agreement was observed between senior and junior pathologists, with 15 of 31 total identified placental lesions (48.4%) demonstrating fair to excellent interobserver agreement (kappa ≥0.40). Our post-hoc analysis examined the kappa scores between each resident pathologist and both senior pathologists together to determine if there were significant differences in reporting of placental lesions between residents. The results can be found in and demonstrate similar average kappa scores across all categories. When examining lesions with the highest degree of interobserver agreement between all pathologists (all levels of training/experience) the lesions associated with evidence of ascending intrauterine infection—including maternal and fetal inflammatory responses (category 4), demonstrated excellent agreement (all comparisons generated kappa scores ≥0.75). Senior pathologists additionally had high levels of agreement for placental lesions in category 7—evidence of chronic utero-placental separation. The average kappa score for this category was (0.71, SD = ±0.39), with strong interobserver agreement for lesions of chorionic hemosiderosis (k = 1.00, SD = ±0) and retroplacental adherent hematoma (k = 0.86, SD = ±0.094), although there was poor interobserver agreement seen for laminar necrosis of the decidua capsularis (k = 0.26, SD = ±0.228). Interestingly, the resident pathologists had very poor agreement for this same category of lesions with an average kappa score of 0.04, SD = ±0.08, and even one kappa score less than 0 (i.e., suggesting agreement worse than expected by chance) for chorionic hemosiderosis lesions. The senior pathologists had the overall lowest agreement for lesions in category 5—evidence of placenta villous maldevelopment (average kappa score = 0.08, SD = ±0.16), which included all lesions subject to much interobserver variability: chorangiosis, chorangioma and delayed villous maturation. 
In comparison, the residents had a wide variation in levels of interobserver agreement for this same category of lesions, with an average kappa score of 0.43 (SD = ±0.52). Less than chance levels of agreement were observed for chorangioma (k = −0.010), but excellent consensus was reached for chorangiosis (k = 1.00).
Placental histopathological examination is an often overlooked, but valuable clinical tool to investigate the etiology of adverse pregnancy outcomes. , , Compared to other fields, placental examination is still in its infancy with a multitude of avenues for further work and improvement. Several challenges exist within the field of placenta pathology including poor interobserver reliability, reporting of lesions of unclear clinical significance and lack of consensus on diagnostic reporting criteria. - Until recently, with the establishment of the Amsterdam consensus statement criteria, there have been few efforts for international standardization of diagnostic criteria in placental assessments and there is a lack of implementation of synoptic reporting as compared to other areas of pathology.
Here we sought to assess the potential clinical and/or research utility of a synoptic reporting tool for placental pathology that builds on Amsterdam consensus criteria, by comparing the pathology findings reported when using a synoptic vs historical narrative approach. Moreover, we assessed interobserver agreement between resident and senior perinatal pathologists when using the synoptic tool to determine the reproducibility of data collected by users with varying experiential and training backgrounds. Synoptic reporting, with a line-by-line evaluation of a data element followed by a response, has been incorporated into oncologic pathology reporting practices for decades, and with the College of American Pathologists (CAP) as a major driver, synoptic reporting is now a mainstay in the field of oncology. , , Many studies to date, mainly in the field of oncology, have demonstrated the numerous benefits of synoptic reporting over traditional narrative reporting, including increased completeness of pathology reports, better reporting quality, higher degrees of satisfaction amongst the entire care team, and the potential for data linkage and population-level research. , , , Although synoptic reporting has been most widespread in cancer care, there have been reports of its uptake in other areas, including operative reporting and radiology, which demonstrate similar benefits. - To date, however, there has been no clear evidence of the use or benefit of synoptic reporting in the domain of placenta pathology. With the movement toward international consensus on diagnostic criteria in placental pathology, the adoption of a synoptic report such as the one proposed by Benton et al and utilized in the present study will be of particular benefit in this field. In our study, using the synoptic reporting tool, 169 placental lesions across all cases were identified that were originally missed in the narrative report. The synoptic report also identified 100% of cases that were missed in the narrative report with respect to the lesions of increased basement membrane fibrin (1 case total) and laminar necrosis of the decidua capsularis (6 cases total). Although these lesions were relatively uncommon in our sample, this highlights the potential value of synoptic reporting for the detection and reporting of rarer lesions; however, the clinical utility of these additional findings remains to be determined. Previous work has demonstrated that laminar necrosis is a distinct form of necrosis and has been associated with placental hypoxia. As such, laminar necrosis can be seen in the setting of intrauterine growth restriction and hypertensive disorders of pregnancy, with potential for significant maternal and fetal morbidity and mortality. - While the Amsterdam Consensus Statement notes that there is insufficient evidence to include these lesions under the category of maternal vascular malperfusion, including such a lesion in a comprehensive placenta pathology synoptic report such as ours is important for further data collection in order to better define such lesions, their clinical associations, and the recurrence risk for future pregnancies. The synoptic report essentially acts as a visual cue, helping to identify less common lesions that could otherwise be overlooked and not reported.
Interestingly, even lesions that have been well-defined by the Society for Pediatric Pathology and the Amsterdam Consensus Statement (namely maternal vascular malperfusion lesions, fetal vascular malperfusion lesions, and maternal and fetal inflammatory responses in ascending intrauterine infection) were more frequently reported using the synoptic approach. It is important to note that these findings cannot be entirely attributed to the use of a synoptic report alone, as the de novo slide reviews conducted in this study were carried out following the publication and dissemination of the Amsterdam consensus criteria. As such, the pathologists reviewing these cases at the time of this second review were familiar with and would have incorporated these consensus guidelines into their practice. Nevertheless, the embedding of the Amsterdam consensus diagnostic criteria into the synoptic reporting tool most certainly could help ensure the appropriate implementation of the consensus guidelines into clinical and/or research practice in the field. Regarding distal villous hypoplasia, these lesions were more frequently picked up in the narrative report as compared to de novo slide review with the synoptic report. As shown in the post-hoc analysis with senior pathologist DED, maternal vascular malperfusion lesions were overall more frequently recorded in the narrative report as well, as compared to the synoptic report. Again, practice changes and familiarity with Amsterdam consensus statement criteria are likely at play here; however, it is possible that having the diagnostic criteria readily available and clearly outlined within the synoptic report may lead to less “over-calling” of these placental lesions. The synoptic report tested in the current study is quite extensive and includes a wide range of diverse placenta lesions, and as such future work will need to focus on refining this tool to ensure included lesions demonstrate clinical importance. In oncologic pathology, the success of synoptic reporting is certainly the result of widespread and international body consensus regarding the types of lesions to report on and their clinical utility. In the field of placenta pathology this same degree of practice consensus will be needed to encourage clinical uptake. The research presented here is an important first step in assessing the potential utility of such a tool in this field; however, it will be the results of ongoing research endeavors by our group and others, which aim to engage all relevant stakeholders—including pathologists, obstetricians/midwives, neonatologists, placental biologists, and patients alike—that will ultimately help to refine and fine-tune a synoptic reporting tool with strong clinical utility that can serve to improve clinical management and patient counseling following an adverse pregnancy outcome. Certainly, a strong case can be made for the use of a synoptic tool, such as the one tested here in its present form, for the collection of robust and standardized research data. Ultimately, it will be the collection of these comprehensive placenta pathology datasets, which can be linked to maternal and neonatal health outcomes and/or biological measurements, that will allow us to determine the clinical significance of different placenta pathology findings. Our second objective with the current study was to assess interobserver variability between senior perinatal pathologists and pathology residents using the synoptic report for reproducibility and practicality purposes.
In this analysis it was noted that agreement was weaker among resident pathologists, with only 26% of lesions demonstrating fair to excellent agreement, compared to 64% of placental lesions for senior pathologists. Among residents, good consensus was reached for well-defined lesions such as maternal and fetal inflammatory response in ascending intrauterine infection, however rarer lesions such as massive perivillous fibrin deposition, maternal floor infarct pattern, chorionic hemosiderosis, and chorangioma demonstrated poor agreement, likely speaking to a differential in experience and exposure between resident pathologists. It is unsurprising that subspecialty-trained perinatal pathologists reached better overall agreement than the residents as pathology is a highly visual specialty and experience is known to make a difference in diagnostic accuracy. , For all pathologists, poor agreement was seen for lesions that were less common in our sample (incidence <5 cases) such as chorangioma, and lesions that have been historically difficult to achieve consensus, such as distal villous hypoplasia. Thus, despite the additional training and expertise in the field of perinatal pathology, there appears to be subjectivity and/or misunderstanding that underlies lower levels of agreement. When reviewing placenta cases, senior pathologists likely approach cases with a differential in training experiences and style of reporting. Even with the synoptic report acting as a guiding tool, some placental lesions have diagnostic nuances that are inherently subjective. In a study by Redline et al, in which placental cases were examined for 11 lesions by 8 perinatal pathologists, interobserver agreement ranged from kappa values of 0.25 to 0.61 with lowest agreement for increased intervillous fibrin lesions. Authors noted several factors contributing to variability including differing interpretations of diagnostic criteria, personal biases, and lacking standardized measuring devices. Furthermore, in a single-center study by Al-Adnani et al, an audit of 164 singleton placentas by 4 perinatal pathologists was completed to assess for delayed villous maturation (DVM). From the 38 cases that were reported to show DVM by at least 1 pathologist, consensus with at least 3 pathologists was found only in 14 cases. Issues in concordance were postulated to be due to conflicting diagnosis criteria and degree of placental immaturity deemed significant. While the implementation of a synoptic report would mitigate the possibility of competing differences in diagnostic criteria, assessing the severity of lesions is still nuanced and practices can vary. To improve agreement and generalizability in using the synoptic reporting tool, our team is working to convert the synoptic report into an electronic form with representative sample images embedded to serve as a reference/template for reporting pathologists. In our study, resident pathologists served as surrogates for non-subspecialty trained pathologists. The results reinforce the notion that placental pathology is a field where advanced training and experience makes a difference in the accuracy of understanding diagnostic and severity criteria. The synoptic tool, however, can be important in histopathology education and training, highlighting where training may be lacking, and which lesion diagnostic criteria could be refined. 
Additional subspecialized training specific to perinatal pathology could be an important avenue for general pathologists in community-based non-academic settings. Continuing professional development courses are currently available through the College of American Pathologists and similar organizations. Future work to develop additional training in perinatal pathology could provide a background for non-subspecialty pathologists to review placenta cases. With the complement of a synoptic reporting tool as a guide and framework, trainees and non-subspecialty-trained pathologists could refer to the tool when producing a report, helping to make placental pathology more accessible. Strengths of this study include the examination of placentas by both resident and senior subspecialty-trained perinatal pathologists to examine the functionality of the synoptic reporting tool with respect to various stages of training. Additionally, all pathologists were blinded to previous placental examination records and clinical information, and reviews were conducted separately by each pathologist. Our study was a preliminary investigation and was limited by sample size; thus, for lesions that were uncommon, disagreement on one placenta had a greater negative impact on the overall kappa score. Additionally, narrative reports included within the study were signed out by any of the pediatric pathologists at CHEO at that time, and the analysis was not restricted to those reports signed out by senior perinatal pathologists DED or DG who performed de novo review of the placenta cases using the synoptic report. It is also important to consider the fact that reporting practices and habits may have naturally evolved in the time between the initial narrative report and the de novo review with the synoptic report. Importantly, in the context of a retrospective review of pathology cases for the purposes of this research study, it is likely that the de novo placenta pathology report findings would be superior to historical reports to some extent, due to the widespread dissemination and clinical uptake of the Amsterdam consensus. Despite the potential benefits of synoptic reporting, an important consideration is the perceived and/or realized increase in workload with the completion of a comprehensive synoptic report. We recognize that the synoptic report tested within this study is lengthy and would be burdensome to reporting pathologists, and thus is most appropriate for research settings in its present form. As discussed above, refinement of this tool with an emphasis on lesions of high clinical relevance, and potential incorporation into a template for electronic medical records, would serve to reduce such burden. It will further be of high priority to envision and develop machine learning algorithms capable of combining key elements of the pathology report into a “top-line” diagnosis, meaning a clinically significant and meaningful output that is beneficial to all stakeholders. This area of work is already underway by our group and others, including work by Freedman et al, who are formulating meaningful placental phenotypes based on MVM, FVM, and chronic inflammatory lesions. The results of these ongoing projects will certainly help to move this field forward, envisioning a future in which the systematic collection of placenta pathology data can be used to better understand the disease process, recurrence risk in future pregnancies, and future health risks for mothers and infants following an adverse pregnancy outcome.
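To illustrate the kind of “top-line” roll-up described above, the toy sketch below maps a set of binary synoptic lesion fields to a single dominant-pattern label. The category groupings, field names, and the simple counting rule are invented for demonstration only and do not represent the authors' algorithm or any published classifier.

```python
# Illustrative only: a toy rule that rolls hypothetical synoptic lesion flags up into
# a single "top-line" label. Thresholds and groupings are invented for demonstration.
HYPOTHETICAL_CATEGORIES = {
    "maternal vascular malperfusion": ["infarct", "distal_villous_hypoplasia", "decidual_arteriopathy"],
    "fetal vascular malperfusion": ["fetal_vessel_thrombosis", "avascular_villi"],
    "ascending intrauterine infection": ["maternal_inflammatory_response", "fetal_inflammatory_response"],
}

def top_line_diagnosis(report: dict) -> str:
    """Return the category with the most positive lesion flags, or 'no dominant pattern'."""
    counts = {
        category: sum(report.get(lesion, 0) for lesion in lesions)
        for category, lesions in HYPOTHETICAL_CATEGORIES.items()
    }
    best_category, best_count = max(counts.items(), key=lambda item: item[1])
    return best_category if best_count > 0 else "no dominant pattern"

example_report = {"infarct": 1, "distal_villous_hypoplasia": 1, "fetal_vessel_thrombosis": 0}
print(top_line_diagnosis(example_report))  # -> "maternal vascular malperfusion"
```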
In this study, we sought to evaluate a novel synoptic reporting tool for placental pathology, building on the Amsterdam consensus statement criteria. We propose that synoptic reporting is one method to help address the current issues in standardization and reporting of placental lesions. We demonstrated that this tool can help in categorizing captured placental pathology data for research purposes and generally helped to identify more lesions than historical narrative reporting (although this finding was not uniform). Kappa analysis was completed to assess the reliability and reproducibility amongst pathologists when using the synoptic tool, and demonstrated fair reproducibility of results when the tool is used by senior pathologists. Future directions include engagement with key stakeholders to further refine the synoptic report to ensure clinical utility, and the application of synoptic reporting tools to capture robust placenta pathology data in research settings to better understand placenta-mediated diseases of pregnancy and the clinical importance of different placental lesions for the management and counseling of patients following an adverse pregnancy outcome.
Supplemental material (sj-docx-1-pdp-10.1177_10935266231164446, sj-docx-2-pdp-10.1177_10935266231164446, and sj-docx-3-pdp-10.1177_10935266231164446) for Synoptic Reporting in Clinical Placental Pathology: A Preliminary Investigation Into Report Findings and Interobserver Agreement by Sonia R. Dancey, Samantha J. Benton, Anthea J. Lafreniere, Michal Leckie, Benjamin McLeod, Jordan Sim, Dina El-Demellawy, David Grynspan and Shannon A. Bainbridge is available in Pediatric and Developmental Pathology.
Impact of Brief Lactation Rotation in Residency on Decision to Refer for Lactation Support | 684c1b38-e3f3-4192-8a2f-c9f9060902e8 | 11544676 | Pediatrics[mh] | Despite the well-documented benefits of breastfeeding, physicians have historically reported that they receive little education about breastfeeding. While education surrounding human lactation in medical education and residency programs has increased, more than 50% of physicians report that their breastfeeding training in residency was inadequate, and only approximately half rate themselves as effective at providing breastfeeding counseling. Additionally, most indicated a lack of time to counsel families about breastfeeding and a desire for additional interactive breastfeeding education. - Assisting with breastfeeding difficulties is time-intensive and may require several follow-up appointments before problems are fully resolved. Lactation consultants often work with patients in recurrent appointments lasting 1 to 2 hours to address lactation problems. Due to time constraints, many lactation-related issues cannot be thoroughly addressed by physicians in a typical well-child visit. Physicians may find it helpful to refer patients with lactation-related issues to lactation consultants in the same way they would refer a patient with a speech or motor disorder to a speech-language pathologist, occupational therapist, or physical therapist. In the Surgeon General’s Call to Action to Support Breastfeeding, physicians are encouraged to utilize a team approach to support those having trouble breastfeeding, and it explicitly states that this team should include an International Board Certified Lactation Consultant (IBCLC). However, some research suggests that challenges with collaboration can hinder patient access to skilled lactation support. A systematic review of qualitative studies identified lack of referrals and delayed referrals as barriers to providing appropriate breastfeeding support. Similarly, in a recent survey of lactation support providers in Appalachia, 84.3% identified challenges related to other health professionals as a barrier influencing their ability to provide lactation support. Although prior research has shown that lack of continuity of care is a barrier to coordinated breastfeeding support, no research has been done to assess the impact of breastfeeding-related residency training on physician decisions to refer to lactation consultants. Because lactation consultants may help reduce physician workload, improve parental breastfeeding self-efficacy, and increase patient satisfaction, it is important to determine factors that affect physician decisions to refer to lactation consultants. , In addition to providing medical residents with breastfeeding education, outpatient rotations with lactation consultants familiarize physicians with the lactation consultant’s role beyond the hospital setting and may increase referrals to outpatient lactation consultants in clinical practice. Therefore, the objective of this study was to examine the post-residency referral patterns of physicians to lactation consultants among former pediatric, family medicine, and medicine-pediatrics residents who did or did not participate in a 4- to 8-hour outpatient lactation rotation at an urban academic lactation clinic during residency. Study Design, Setting, and Participants A cross-sectional observational survey design was used for this study, conducted at an academic medical center in a major Texas city. 
Data collection occurred during the 6 weeks from October 9, 2023, to November 20, 2023. Participants were physicians who were eligible for inclusion if they (a) completed medical residency at a pediatrics, family medicine, or combined medicine-pediatrics program that routinely has residents participate in a brief outpatient lactation rotation, (b) graduated from residency from 2013 to 2022, and (c) were currently practicing medicine in the United States. Participants who indicated that they never care for breastfeeding patients were excluded from the study. Emails were sent to 461 residency graduates inviting them to complete an anonymous online survey in REDCap. A reminder email was sent 1 week following the initial survey invitation, and the survey was programmed to accept responses for 6 weeks. An a priori power analysis was conducted using G*Power version 3.1.9.7 to determine the minimal sample size needed for our analyses. With 80% power, an alpha level of .05, and 5 predictors, at least 55 responses were required to detect a medium effect ( f 2 = 0.15) or at least 25 responses to detect a large effect ( f 2 = 0.35) in a multivariable linear regression model. Therefore, our goal was to have a response rate of at least 11.9% to achieve 55 total responses. A participant flow diagram illustrating the number of individuals who received the questionnaire, those who completed it or were excluded, and the final distribution of participants in the exposure and control groups is provided in . Exposure and Control Groups Resident physicians completed brief outpatient lactation rotations in 1 to 2 4-hour blocks at an academic outpatient breastfeeding clinic staffed by board-certified lactation consultants. Rotations typically consisted of basic breastfeeding education and observation of lactation clinic visits. Clinic visits were conducted by lactation consultants in 75 to 90-minute intervals. A typical visit included a history and assessment of the lactating parent and breastfeeding infant and a pre-feed and post-feed weight measurement to assess overall growth and milk transfer from parent to infant. Shared decision-making principles were used to develop a post-visit plan for parents to implement at home. Although many residents in the collaborating residency programs were able to attend the outpatient lactation rotation, some residents were reassigned to cover other services on the days they were scheduled for the rotation or were absent due to illness or vacation. Others completed residency in a year that their program did not send residents to complete the rotation. In this study, the residents who did not complete the outpatient lactation rotation served as the control group, while the residents who completed the rotation comprised the exposure group. Survey Items The primary outcome of this study was lactation referral frequency, which was assessed by self-report. Respondents were asked: “For your patients who are struggling with breastfeeding, how often do you recommend that they get breastfeeding assistance from a lactation consultant?” Response options ranged from 0 to 4 where 0 = never and 4 = always . Respondents were only asked this question if they answered yes when asked, “In your current practice setting, do you ever assess or obtain information about how breastfeeding is going?” as those who do not ever assess breastfeeding would not have the need to refer to a lactation consultant. 
Demographic information, including gender, race/ethnicity, and birth year, was obtained from respondents through self-report (see supplementary file ). Additional variables were included because of their theoretical potential to influence referral frequency. These included how frequently respondents care for breastfeeding patients (1 = very rarely to 5 = very often ), whether there are lactation consultants available to support breastfeeding in the geographic region in which the respondent practices, whether they had any other clinical education experience with a lactation consultant (in the hospital, in a patient’s home, or at another outpatient clinic), and infant feeding attitudes operationalized through the 17-item Iowa Infant Feeding Attitude Scale (IIFAS). The IIFAS is used to measure maternal attitudes toward infant feeding and has demonstrated reliability and validity in a wide variety of populations. Items responses are selected on a 5-point scale (1 = strongly disagree , 5 = strongly agree ) with 9 items reverse scored. Scores range from 17 to 85, with higher scores indicating a preference for breastfeeding. In initial psychometric testing, the Cronbach’s alpha of the IIFAS ranged from .85 to .86, indicating excellent reliability. Although the IIFAS was originally designed to be used in maternal populations, it has demonstrated acceptable reliability in other populations, including males aged 21 to 44 in the United States (α = .78), undergraduate nursing students (α = .74), and medical students (α = .77). In this study’s sample, the Cronbach alpha coefficient for the IIFAS was .74. Statistical Analysis Descriptive statistics are reported for all variables, including means and standard deviations for normally distributed variables and medians and interquartile ranges for non-normal variables. For each variable, differences between the exposure and control groups were assessed using independent samples t tests, Chi square tests, or Fisher exact tests. For all variables except the IIFAS score, cases were excluded in an analysis when there was missing data for an included variable (see ). For the IIFAS score, responses with no more than 1 missing item were retained. For these responses, the value of the missing item was imputed using maximum likelihood estimation (expectation-maximization algorithm), and the score for the 17 scale items was subsequently calculated. An initial bivariate analysis was completed to determine if there was a significant difference in the frequency of referring patients who are struggling with breastfeeding to a lactation consultant between the exposure and control groups. Due to the ordinal nature of the outcome variable, a Mann-Whitney U test was initially used to assess the difference between the 2 groups. Then, a multivariable model adjusted for the potential impact of those variables with bivariate differences between groups at P values ≤.10. In this model, we also adjusted for IIFAS score, the presence of a lactation consultant in the physician’s geographic area of practice, and participation in other lactation rotations based on the theoretical likelihood that these variables may have significantly impacted the frequency of referral to lactation consultants. The primary multivariable model evaluated the relationship between the completion of the brief outpatient lactation rotation and reported referral frequency using multivariable linear regression. 
Although the outcome variable is ordinal, it was entered in the linear regression model here as an ordinal approximation of a continuous variable. Linear regression was chosen as the primary model over ordinal regression for 2 reasons. First, because linear regression is robust, ordinal variables with 5 or more categories can be used as continuous variables without introducing significant bias. , Second, linear regression models are interpreted in a more straightforward manner than ordinal regression models. Backward stepwise elimination was used to select variables for the final regression model. Variables with P values >.10 were removed, starting with the variable with the highest P value and ending when all variables in the model had P values ≤.10. A sensitivity analysis was conducted using a multivariable ordinal regression model, including the same variables in the final linear regression model. A subgroup analysis was conducted with responses from participants who indicated that there is a lactation consultant available within their geographic region of practice using both multivariable linear and ordinal regression models. SPSS version 29 was used for data analysis.
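For illustration only, the sketch below shows how an IIFAS total score (17 items, 9 reverse-scored, possible range 17 to 85) and Cronbach's alpha can be computed. The specific items that are reverse-scored are not listed in the text, so the indices used here are placeholders, and the simulated responses are not study data.

```python
# Sketch of IIFAS scoring and internal-consistency estimation (assumptions noted below).
import numpy as np

# Hypothetical responses: rows = respondents, columns = the 17 IIFAS items (scored 1-5).
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(46, 17)).astype(float)

# The IIFAS reverse-scores 9 items; the specific item numbers are not given in the text,
# so these column indices are placeholders only.
REVERSE_ITEMS = [0, 2, 4, 6, 8, 10, 12, 14, 16]
scored = responses.copy()
scored[:, REVERSE_ITEMS] = 6 - scored[:, REVERSE_ITEMS]  # 1<->5, 2<->4, 3 unchanged

total_scores = scored.sum(axis=1)  # possible range 17-85; higher = more pro-breastfeeding

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(total_scores.min(), total_scores.max())
# With purely random simulated data alpha will be near zero, unlike the .74 reported
# for the study sample; the point here is only the computation itself.
print(f"Cronbach's alpha = {cronbach_alpha(scored):.2f}")
```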
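As a hedged re-expression of the modelling steps above (the study itself used SPSS version 29), the sketch below runs the bivariate Mann-Whitney U comparison, the primary linear regression treating the 0 to 4 referral scale as approximately continuous, and an ordinal sensitivity model, all on simulated data with hypothetical variable names. The published sensitivity analysis used a complementary log-log link; the documented logit option of statsmodels' OrderedModel is shown here instead.

```python
# Illustrative re-expression in Python of the analysis plan described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 46  # matches the analyzed sample size, but the values below are simulated
df = pd.DataFrame({
    "referral_frequency": rng.integers(0, 5, size=n),  # 0 = never ... 4 = always
    "completed_rotation": rng.integers(0, 2, size=n),  # exposure vs control
    "iifas_score": rng.integers(49, 86, size=n),       # infant feeding attitudes
    "lc_available": rng.integers(0, 2, size=n),        # lactation consultant in region
})

# Bivariate comparison: Mann-Whitney U test of referral frequency between groups.
exposed = df.loc[df.completed_rotation == 1, "referral_frequency"]
control = df.loc[df.completed_rotation == 0, "referral_frequency"]
u_stat, p_value = mannwhitneyu(exposed, control, alternative="two-sided")

# Primary model: OLS treating the ordinal 0-4 outcome as approximately continuous.
X = sm.add_constant(df[["completed_rotation", "iifas_score", "lc_available"]])
ols_fit = sm.OLS(df["referral_frequency"], X).fit()

# Sensitivity analysis: ordinal regression on the same three predictors
# (no constant is passed; OrderedModel estimates the thresholds itself).
ord_fit = OrderedModel(df["referral_frequency"],
                       df[["completed_rotation", "iifas_score", "lc_available"]],
                       distr="logit").fit(method="bfgs", disp=False)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
print(ols_fit.params)
print(ord_fit.params)
```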
Respondent Characteristics Forty-eight survey responses were received. Two were excluded because the respondent answered only the first few questions before abandoning the survey. Ultimately, 46 responses were included in the analysis, a 10% response rate. Respondent characteristics are displayed in . Most of the respondents were female (72.7%), White (41.3%) or Asian (30.4%), graduates of pediatrics residency programs (58.7%), and currently practicing in a primary care setting (69.6%). Nearly half (44%) indicated that they care for breastfeeding parents or infants very often, and more than 80% indicated that there is a lactation consultant available within their region of practice. Comparing those who completed the brief outpatient lactation rotation with those who did not, there was a significant difference based on residency type, with significantly more former pediatrics residents (92%) completing the rotation compared to former family medicine residents (41%). A higher percentage of females (79%) than males (50%) completed the brief outpatient lactation rotation.
There were no significant differences in other experiences with lactation consultants or IIFAS scores between the 2 groups. The mean IIFAS score was 64.2 ± 7.1, indicating the sample had neutral attitudes toward breastfeeding overall. About 72% of respondents who completed the IIFAS had scores corresponding to neutral breastfeeding attitudes (IIFAS scores 49–69), and the remainder (28%) had scores corresponding to positive attitudes toward breastfeeding (IIFAS scores 70–85). No respondents had an IIFAS score that indicated negative attitudes toward breastfeeding. Impact of Brief Outpatient Lactation Rotation on Physician Referrals to Lactation Consultants An initial bivariate analysis was conducted to assess the relationship between the completion of the lactation rotation and the decision to refer to lactation consultants, and a significant difference was identified. Respondents who completed the lactation rotation reported significantly higher frequency of referring to lactation consultants than those who did not, U = 164, P = .023. To further evaluate this relationship, a multivariable linear regression was used to adjust for potential covariates. Gender, residency type, presence of lactation consultant in the respondent’s geographic area of practice, completion of any other rotations with lactation consultants, and IIFAS score were entered into the baseline model based on differences between exposure and control groups and theoretical impact on the outcome of interest. Following backward stepwise elimination, the final model was constructed to include rotation completion and to adjust for IIFAS scale score and presence of a lactation consultant in the physician’s geographic region of practice . The final linear regression model ( n = 27) evaluating the relationship between completion of a brief outpatient lactation rotation and frequency of referring to a lactation consultant was significant when adjusting for IIFAS score and presence of a lactation consultant in the respondent’s geographic area of practice . Those who completed the lactation rotation were, on average, 1 level (never, rarely, sometimes, usually, always) more likely to refer to a lactation consultant than those who did not complete the rotation when IIFAS score and presence of a lactation consultant in the respondent’s geographic area of practice were held constant ( B = 1.091, t = 3.231, P = .004). IIFAS score was negatively associated with frequency of physician referral to a lactation consultant such that a 20-point increase in IIFAS score is associated with a 1 level decrease in frequency of referral ( B = −0.052, t = −2.376, P = .026). We conducted a sensitivity analysis using a multivariable ordinal regression ( n = 27; complementary log-log link) with the same 3 predictors . , The predictors accounted for a significant amount of variance in the frequency of referring to a lactation consultant. The odds of those completing the lactation rotation being more frequent referrers in post-residency practice were 8.3 (95% CI, 1.8 to 37.6) times that of those who did not complete the rotation. Every 1-point increase in IIFAS score was associated with a 10% decrease in the odds of being a more frequent referrer. The 3 predictors accounted for approximately 74% of the variance in the frequency of referral to a lactation consultant. 
In a subgroup analysis of those indicating that there is a lactation consultant available in their geographic area ( n = 24), only completion of the lactation rotation was significantly associated with the frequency of referral to a lactation consultant in both the multivariable linear and ordinal regression models ( and ). IIFAS score was not significant in either model.
In all models, completion of the brief outpatient lactation rotation was significantly associated with a higher frequency of referring patients experiencing breastfeeding difficulties to a lactation consultant. This study provides preliminary evidence that outpatient lactation rotations in residency may increase referrals by physicians to lactation consultants. Although there are no prior studies linking outpatient lactation rotations in residency with future referral patterns, other studies have found an increase in referrals to a specialty among physicians who completed a rotation in that specialty. , It may be that these rotations increase physician familiarity with the specialty and provide knowledge that helps physicians identify patients who would benefit from referral. Evaluating patient utilization of lactation services following physician referrals to lactation consultants exceeded the scope of this study. However, other evidence supports the assumption that increased referrals can improve access to skilled lactation support. In research on cardiac rehabilitation, a strong physician recommendation was a key independent predictor of cardiac rehabilitation participation. Similarly, in a quasi-experimental study of well-child care physician referrals of infants for preventative dental care, both active and passive referrals increased the odds of having a dental visit in the first year of life, with active referrals having a larger effect. In an integrative review, the advice, preferences, and practices of health professionals have been identified as factors influencing women's infant feeding decisions.
In some of the models, infant feeding attitudes were related to referrals in an unexpected direction, with higher IIFAS scores indicative of more positive attitudes toward breastfeeding decreasing the odds of being a frequent referrer. However, in the subsample of participants who reported that lactation consultants were available in their geographic location, infant feeding attitudes were not associated with referral patterns. Physicians without ready access to lactation consultants in their region are less likely to be frequent referrers for apparent reasons: there is either no one available to refer to or a referral would require patients to travel further to see a lactation consultant. It may be that these physicians have more positive attitudes toward breastfeeding if they are more directly involved in the care of their patients due to a lack of referral options. Given the study’s sample size and small effect size of IIFAS in the models, future research is needed to clarify the relationships between physician attitudes toward infant feeding and patterns of referral to lactation consultants. The sampling for this study introduces the possibility of several types of bias. While an email was distributed to every physician who completed residency from each of the 3 residency programs between 2012 and 2023, participation was voluntary. Self-selection bias is inherent in this approach, as those who choose to complete a survey may differ from those who do not. In our case, participants may have been more likely to participate if they had stronger feelings about breastfeeding or breastfeeding education and less likely to participate if they were more ambivalent on these topics. Additionally, social desirability bias may have been a factor. Recruitment emails were sent from the first author’s email address and identified the sender as the director of the outpatient lactation clinic in which exposure group participants completed their rotations. Given the stated goals of the research and the researcher’s identity, it is possible participants responded in a way that they felt would be more desirable to the researchers. To reduce the risk of this type of bias, questions were worded neutrally, and response options included multi-step scales rather than a dichotomous yes or no. Another significant limitation of this study is the response rate of 10%, which falls below our a priori target based on power analysis, raising further concerns about representativeness and non-response bias. Although we sampled the entire eligible population of residency graduates and sent a reminder email to enhance participation, only 46 responses were ultimately usable. Nevertheless, our analyses achieved statistical significance, indicating that the findings remain robust within the context of the responses received. In our study, an outpatient lactation rotation during physician resident training increased the frequency of post-residency referrals to lactation consultants. Including diverse breastfeeding education experiences in physician resident training can increase patient access to lactation support through its integration with other healthcare services. Future research is needed to examine the relationship between breastfeeding attitudes, lactation education, and physician referrals. 
Exploring health literacy development through co-design: understanding the expectations for health literacy mediators | 11b76042-8778-40e7-bb80-e74d6a4c04fc | 11879027 | Health Literacy[mh] | Understanding health inequities Health inequities are systematic differences in health status among individual population groups, influenced by factors such as the social determinants of health (SDH) . These disparities stem from historical and contemporary inequities shaped by societal structures and unequal distribution of power and resources . They negatively impact some individuals and societies, leading to poor health outcomes, economic costs, and social disparities . Health inequities are closely linked to non-communicable diseases (NCDs) . Globally, NCDs are a significant health concern, responsible for over 40 million deaths annually, with cardiovascular disease being the leading cause, followed by cancers, respiratory diseases, and diabetes . These diseases are preventable and often linked to modifiable lifestyle factors such as smoking, alcohol use, physical inactivity, and unhealthy diets . Addressing NCDs and health inequities involves coordinated national and international action, focusing on modifiable risk factors, improving access to high-quality chronic care management, and understanding root causes (such as health literacy [HL]) to then inform policies that reduce these disparities and meet the needs of the population, and especially vulnerable groups within it . Health literacy as a key to equity HL plays a crucial role in addressing both health inequity and NCDs by empowering individuals to understand health information, make informed decisions, and engage in self-management. Efforts to improve HL, both traditional and digital, are essential for promoting better health outcomes and reducing the burden of NCDs globally . The current state of HL within the Australian population reveals both strengths and challenges . People with greater HL challenges often experience adverse health outcomes, increased hospitalizations, and poorer health behaviours than those with fewer such challenges . Improving the HL environment through effective communication strategies, embedding HL into policies, and ensuring accessible information can enhance health outcomes and quality of care . Knowing more about an individual’s and a community’s HL provides an important foundation when creating strategies to strengthen or maintain HL assets. HL assets can refer to the skills, knowledge, and resources individuals and communities possess to access, understand, appraise, and use health information effectively; these assets are vital as they empower people to make informed decisions, navigate healthcare systems, and engage in health-promoting behaviours . Whether an individual has the required HL assets required to manage their health, may reflect on their HL strengths and challenges. This paper will investigate a new role focused on creating HL learning opportunities within a community, evaluating the support, expectations, and requirements for this role to inform future implementation. Enhancing HL assets can lead to better health outcomes, bolster health promotion initiatives, and improve overall well-being. Health promotion and HL are distinct yet complementary concepts that together can contribute to improve overall health outcomes. 
Health promotion focuses on enabling individuals and communities to increase control over and improve their health through broad actions aimed at addressing social, environmental, and individual factors . This includes implementing policies, providing education, and creating supportive environments that facilitate healthier choices . In contrast, HL refers to an individual’s capacity to obtain, process, and understand basic health information needed to make appropriate health decisions . It encompasses people’s knowledge, motivation, and competences to access, understand, appraise, and apply health information effectively . While these concepts differ in their scope and focus, they complement each other in several ways. HL serves as a foundation for effective health promotion, as individuals with stronger HL assets are better equipped to engage with and benefit from health promotion activities . Conversely, health promotion efforts often aim to improve HL as one of their outcomes, enhancing people’s health knowledge and skills through various educational initiatives. Both concepts share the ultimate goal of empowering individuals and communities to take control of their health and are critical for addressing health inequities and achieving broader health and development goals . HL can be viewed as both an outcome of health promotion efforts and a tool that enables further health promotion . In essence, while health promotion provides broader strategies and actions to improve health, HL equips individuals and communities with the skills to effectively engage with these efforts and make informed health decisions. Together, they can create a more comprehensive approach to improving population health. The role of co-design in health promotion Co-designed and community-led health promotion interventions have gained recognition as an effective strategy for addressing complex health issues while ensuring cultural appropriateness and local relevance. This collaborative approach involves engaging community members, researchers, policy-makers, and other stakeholders throughout the development and implementation of health initiatives . By embracing co-design, health promotion efforts can better address health inequities, enhance cultural competence, and lead to more effective and sustainable health outcomes . Additionally, these approaches consider varying levels of HL within communities, making information and interventions accessible and understandable to all . International groups such as the WHO promote using a co-design process to co-design HL solutions . An example of this is Optimizing Health Literacy and Access (Ophelia) process, which aims to improve HL and equitable access to healthcare by implementing locally tailored, evidence-informed solutions in collaboration with communities and stakeholders . This approach begins by assessing the HL requirements of the intended population using the Health Literacy Questionnaire (HLQ) . The HLQ was created to capture the multi-dimensional nature of HL . The Ophelia approach then utilizes data-driven vignettes (case studies derived from HLQ data) to illustrate and convey the HL needs of the target population. This approach has been successful in the co-design of ideas to enhance the HL assets, responsiveness, and outcomes in numerous settings . 
Given that the international literature above highlights both that communities are experiencing significant HL challenges and that health promotion efforts must be cognisant of HL in their design, the concept of a Health Literacy Mediator (HLM) was inspired by the Marmot Review: Fair Society, Healthy Lives , which highlighted the success of local health trainers and community champions in empowering individuals to manage their health. Similar roles have already been explored in Eastern Europe; for example, health mediators have been effective in bridging healthcare access for the Roma communities (Roma Health Mediators Project), and in Hungary, the integration of health mediators as part of multidisciplinary teams has shown success in addressing complex health needs and building trust . Furthermore, various health-support roles such as health navigators, health connectors, health coaches, and health advocates have emerged internationally, reflecting a growing focus on improving, adapting, and developing HL practices. For example, health navigators, also known as patient navigators or care coordinators, help individuals overcome barriers to care by connecting them with healthcare providers and community resources . Health coaches use evidence-based strategies and techniques, such as motivational interviewing, to support patients in achieving health goals and integrating healthy habits into their lives . Health advocates provide case management-like support, helping individuals ask questions and navigate the complexities of healthcare systems . Health connectors focus on addressing inequities by building social support networks for individuals and carers . Building on these foundational ideas, the current research team has expanded and formalized the new conception of the HLM role to address the specific needs and context of the Tasmanian community. An HLM has been defined as 'a person or group of people dedicated to providing learning experiences and opportunities to enable individuals and communities to overcome inequities perpetuated by their social determinants and increase their HL assets to improve their health outcomes'. This definition of the role indicates a holistic approach to supporting an individual's healthcare journey, wherein there is a significant focus upon building autonomous capacity for all individuals, addressing local health inequities, and targeting those disadvantaged by their SDH . The HLM role aims to improve comprehensive HL, beyond just healthcare access, and to actively engage in health promotion with individuals, organizations, and policy-makers in the local community. Community expectations of healthcare typically encompass accessible, affordable, and high-quality services . Additionally, communities desire healthcare systems that are culturally sensitive, equitable, and inclusive, ensuring that all individuals, regardless of background or socioeconomic status, receive adequate care . There is also an expectation for healthcare to be proactive in promoting health and preventing diseases through education and community-based interventions . Investigating the role of HLMs is important due to their potential to impact an individual's or community's health outcomes positively.
By improving HL assets, HLMs could empower individuals to make informed health decisions, adhere to treatment plans, and adopt healthier lifestyles. This empowerment may lead to better management of chronic conditions, reduced hospital readmissions, and overall improved health outcomes . Moreover, HLMs could play a pivotal role in addressing health inequities by targeting interventions towards disadvantaged populations, thus ensuring that HL improvements are inclusive and equitable . This is why this study aims to co-design the emerging HLM role with various stakeholders working in health and health-related settings across diverse Tasmanian regions. This will be achieved by assessing the support, expectations, and need for such a role via online workshops.
A collaborative constructivist approach was employed in this research. This approach was selected to explore and co-construct meaning from the data through active collaboration amongst researchers and participants . The project received ethics approval from the University of Tasmania Research Ethics Committee (Approval Number H0026170). All participants were required to read an information sheet and give electronic and verbal consent prior to admission to the workshop; they were aware that, whilst within the workshop, they were not anonymous to each other, but that any data gathered during the discussion would be de-identified. Participants and recruitment The study setting for this research was Tasmania, Australia. HL levels vary across Australia, with Tasmania experiencing some of the lowest health and educational outcomes, as highlighted by the Optimising Health Care for Tasmanians Report, which underscores the state’s challenges in addressing preventable chronic diseases, socioeconomic disadvantage, and educational attainment . Due to the online nature of the workshops, participants could be located anywhere within the state and still partake. Public health professionals, healthcare providers, managers, and allied health professionals working in Tasmania’s health sector were recruited through purposive and snowball sampling methods . Recruitment occurred via the Tasmanian Health Literacy Network, the Tasmanian Health Department, and the research team’s professional networks. These stakeholders were selected for their relevant knowledge, experience, and interest in the project. Initial contact was made through an email from the research team, which included a brief study description and a registration form for participation in the co-design workshops. Interested individuals were then sent detailed participant information sheets and consent forms. The research team also encouraged these stakeholders to disseminate the workshop details within their own professional networks to increase participation. A total of 15 stakeholders participated and chose one of two identical workshops to attend (Workshop 1, n = 8; Workshop 2, n = 7). The stakeholders represented a range of different sectors including the Department of Health, the University of Tasmania, and not-for-profit organizations, as summarized in . Participants were from all around the state, with nine from Southern Tasmania, four from Northern Tasmania, and two from the Northwest Coast.
The majority of the participants who took part in the workshops were women ( n = 14). Data collection Consistent with the Ophelia approach, the data for this phase of the research project were gathered from focus group discussions within online workshops. Gathering data through co-design workshops aligns with one of the steps within the Ophelia process, where stakeholders collaboratively design tailored HL interventions based on identified community needs . Two online workshops were conducted in March 2023 on Microsoft Teams, a video-conferencing platform . Each ran for approximately 1 hour, starting with an introduction to the overall project and an overview of the HLQ survey results from previous phases of the research project . Following this, data-informed vignettes were shared with the group. These vignettes were created specifically for this study from a cluster analysis of the HLQ data ( n = 255) and interview data ( n = 14) representing the HL strengths and challenges of the target population . The following questions were presented with the aim of generating discussion to identify local solutions that could respond to the needs of the individuals and families personified in the vignette(s). The questions were: Do you know anyone like this individual or this family in your community? What are the main barriers that this individual/family is facing? What can be done to help this individual/family? How might an HLM assist in these solutions? Should an HLM role be an extension of what already exists or a new role? Would improving HL assets from an earlier age impact these situations? Participants were encouraged to use their microphones and the chat function to contribute to discussions and share ideas during the workshop. Both workshops were digitally recorded, with consent obtained from all participants beforehand. MS conducted all the workshops, with RN serving as the co-facilitator. Throughout and at the conclusion of each workshop, both the facilitator and co-facilitator made observational notes on the discussions that had occurred. Data analysis For the analysis of the data in this qualitative study, a thematic analysis was employed, as described by . This method was used to provide insights into how the key stakeholders who participated in the workshops conceptualize an HLM from within their specific contexts. This single thematic analysis involved six distinct phases, as described by . Initially, for Phase One, MS and IC (student researcher) collaboratively immersed themselves in the data. This familiarization process included transcribing the workshop discussions verbatim via auto-generation within the Microsoft Teams software and combining that with the researchers’ observational notes and comments that participants had noted in the chat box. It also involved repeatedly listening to audio recordings and thoroughly re-reading the transcripts. Key information from each transcript was highlighted and systematically recorded in an Excel spreadsheet. Subsequently, for Phase Two, IC developed codes. An inductive approach was adopted, beginning with real-world observations, identifying patterns, and formulating theories based on these patterns. The coding process was repeated multiple times, focusing on the data while considering the influence of prior knowledge from earlier readings on the topic.
As the analysis progressed and entered Phase Three, IC conducted a theme search, using the codes as foundational elements to group and refine them into preliminary themes. These initial themes were reviewed and discussed with MS, ensuring that the most pertinent points were captured and aligned with the research objectives as per Phases Four and Five. The final step, Phase Six, involved gathering all qualitative responses, revising the original themes through discussions amongst all authors, and refining and defining the themes to be reported. Through this single thematic analysis of the workshop discussions, multiple themes were identified; the themes are reported in and , utilizing a contemporary ‘infographic’ structure to display the findings. Included example quotes were selected for their clarity and precision in reflecting one of the defined themes, although other participants provided similar responses .
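For readers unfamiliar with how cluster-derived vignettes can be produced, the sketch below illustrates one plausible workflow for grouping HLQ scale scores into profiles. The paper does not reproduce its clustering procedure, so the scale labels, Ward linkage, number of clusters, and simulated data are illustrative assumptions rather than the method actually used in this study.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Nine HLQ domain scores; the generic names below are placeholders.
HLQ_SCALES = [f"hlq_scale_{i}" for i in range(1, 10)]

def derive_profiles(df: pd.DataFrame, n_clusters: int = 6) -> pd.DataFrame:
    """Cluster respondents on their HLQ scale scores and return each cluster's
    mean scale scores; every row can then seed a vignette describing that
    group's HL strengths and challenges."""
    scores = StandardScaler().fit_transform(df[HLQ_SCALES])
    labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(scores)
    return df[HLQ_SCALES].groupby(labels).mean().round(2)

# Example run with simulated data standing in for the 255 survey responses.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.uniform(1, 5, size=(255, 9)), columns=HLQ_SCALES)
print(derive_profiles(demo))
```

Each row of the resulting table (a cluster's mean scale scores) could then be narrated into a short case study of that group's HL strengths and challenges, which is the form in which the vignettes were presented to workshop participants.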
All of the participants could relate to the families presented in each of the cases, with multiple comments identifying how realistic the scenarios were for people living within their community. For example: I could identify with this case study (vignette). We all experience our own health issues and those of our family members from time to time. And sometimes it can be very challenging to find the time and know where to go to get the help you need in that moment. So, I think it’s actually quite a common problem for anyone that has to engage with the health system— Participant 2. Discussions led to the key stakeholders voicing their thoughts and concerns with the current healthcare system which then allowed us to identify barriers that the individuals and their families in each vignette were facing. Discussion then moved into how the emerging HLM role may assist in overcoming these issues. All stakeholders voiced their opinions on what the expectation of the HLM role should be and how they could see the role helping their own communities. Barriers to healthcare Through analysis of stakeholder discussions, four major themes emerged that encapsulate the barriers to healthcare within the vignettes presented: Theme One: Time Both individual and systemic challenges in the time to access healthcare exist. On an individual level, limited time due to work, caregiving, and other responsibilities often causes healthcare to be deprioritized. Systemically, long waiting times, limited after-hours services, and delays in accessing specialists or diagnostic tests exacerbate these issues. These barriers are particularly pronounced in rural or regional areas, where healthcare requires additional travel time. Theme Two: Navigating and understanding healthcare Individuals often face challenges in understanding the available options when seeking healthcare, which can make it difficult to ask the right questions to obtain appropriate care. This issue is exacerbated by limited services throughout healthcare, particularly in regional areas. Theme Three: Access to the right healthcare It is crucial for individuals to find the right healthcare provider who can offer the necessary information to manage their individual or family’s healthcare needs. Access to care is further altered by financial and social resources. Theme Four: Expectation of healthcare Individuals expect to be listened to by healthcare professionals. However, when they feel that this is not happening, they are put off and become disengaged with the system. Additionally, societal attitudes can play a role in shaping what is considered socially acceptable regarding illness and chronic disease. This can alter both how healthcare professionals discuss the topics but also how individuals seek help for themselves. For each of the themes that were identified, there were a number of subthemes that related to the perceived barriers to healthcare. The themes, subthemes, and some example quotes can be seen in . Co-design of the HLM position Building on the identified barriers, discussions then transitioned to how an HLM could play a transformative role in mitigating these challenges and improving overall HL. Stakeholders emphasized four primary expectations for the HLM role: Expectation One: Solution-focused The HLM role should be solution-focused, using their understanding of individual and community barriers to address healthcare challenges. 
Stakeholders emphasized that the HLM should empower individuals, families, and communities by providing essential learning opportunities. A key responsibility of the HLM would be improving the efficiency with which both individuals and healthcare organizations use their time. By streamlining processes and facilitating access to resources, the HLM could help people navigate the complex healthcare system more easily. As a reliable point of contact for health-related inquiries, the HLM would reduce confusion and simplify access to information. Additionally, the HLM could advocate for flexible, community-tailored healthcare solutions. By understanding the specific needs of the population, they could help design strategies that are culturally sensitive and effective. The presence of a trusted community member in this role fosters trust and reliability. Equitable healthcare is central to the HLM’s role, ensuring that all individuals, regardless of background, have equal access to healthcare resources, and are empowered to make informed decisions. By addressing health inequities driven by SDH, the HLM can promote a more inclusive healthcare environment. Expectation Two: Duty to facilitate change An HLM should play a key role in connecting individuals, families, and communities with healthcare services, bridging gaps between people, healthcare providers, and community organizations. By fostering open communication, the HLM would create safe spaces where individuals feel heard and empowered to ask questions and make informed health decisions. Their role would involve building trusting relationships and advocating for inclusive, respectful care that addresses the diverse needs of the community. Their role could then extend beyond mere advocacy; they could provide wrap-around support by possessing in-depth knowledge of the healthcare system, actively listening to community members’ problems, strengths, and queries, and working to break down existing barriers to healthcare access. Expectation Three: Community-based role The key stakeholders wanted to see an HLM as someone in a community who serves as a crucial link between individuals and the healthcare system, enhancing the community’s overall HL. This position could be effectively filled by expanding the responsibilities of existing roles such as nurses, school nurses, teachers, or more specialized positions like migrant health workers and Aboriginal health workers. Building the capacity of these individuals could allow them to step into the role of HLM, utilizing their existing trust and presence within the community. Additionally, larger organizations such as state libraries, universities, and not-for-profits could support these mediators through outreach initiatives, providing resources and training to enhance their effectiveness. The community-based nature of this role would ensure greater success and sustainability, as the HLM could tailor their approaches to the specific needs and cultural contexts of their communities. Expectation Four: Targeted position to have the most impact The final expectation was that an HLM could play a pivotal role in enhancing HL by intervening early in an individual’s life and being available to individuals during their childhood education years. By providing guidance and education before health needs arise, the HLM would help foster a deeper understanding of health-related issues.
This proactive approach could be particularly effective in settings such as schools, youth groups, and teenage-specific programmes, where young people are developing independence and forming lifelong habits. By equipping these individuals with the necessary HL skills, they can become informed decision-makers, capable of navigating the healthcare system. Furthermore, the stakeholders identified that as these individuals share their knowledge within their networks, a ripple effect occurs, leading to a more health-literate community overall. Discussion around how the role could impact the individuals and families presented in the vignettes not only produced the four strong expectations above but also other recommendations and supporting ideas for the role. These can be visualized in .
This study aimed to co-design the emerging HLM role for the Tasmanian community and to assess the support, expectations, and need for such a role to then help guide the implementation in the future. This study demonstrates how considering a community’s current HL and engaging key stakeholders (those working in Tasmania’s health sector, with relevant knowledge, experience, and interest in HL and improving the health of their community) in the planning and design of new public health solutions can support the development of a fit-for-purpose, context-specific role. Experiences with similar roles in other parts of the world highlight valuable lessons for the development and sustainability of HLMs. These experiences underscore the importance of incorporating community-specific knowledge and adapting roles to local contexts.
In Romania, the Roma Health Mediators initiative serves as a notable case where health mediators have strengthened connections between marginalized communities and health services . This programme has demonstrated that the success of such roles often depends on robust training, community acceptance, and ongoing support. However, these programmes also demonstrate that while community-based health workers can enhance trust and access, they also face challenges related to sustainability, training, and funding. These insights could inform the design and implementation of HLMs in Tasmania, emphasizing the need for a strong framework that considers the social and cultural factors unique to each community. The HLM role could then go beyond navigation and connection to acknowledge the SDH and could enhance individuals’ HL assets to improve their health outcomes. Addressing SDH through an HLM could play a crucial role in reducing health inequities and improving health outcomes. Social determinants, such as socioeconomic status, education, and living conditions, significantly influence health outcomes . By focusing on these non-medical factors, HLMs can help bridge the gap between healthcare access and the broader social environment that affects individual and community health. HLMs could play a vital role in creating equitable health opportunities by helping to empower individuals with knowledge and resources to navigate their social contexts effectively. This approach not only addresses immediate health needs but also tackles the root causes of health disparities, promoting long-term health equity . The co-design of the HLM position brings us a step closer to developing an HL-responsive role in Tasmania. This may play a crucial part in improving the HL and health outcomes for the community. The co-designed workshops generated a number of important concepts, from the expectations of the HLM, recommendations for the HLM role, and finally other ideas that would be important for the HLM position’s development. In order to be successful and sustainable, the role of HLMs must be assessed within Tasmania’s current healthcare landscape. Existing gaps in HL and accessibility indicate where HLMs could be most impactful. HLMs could collaborate with health navigators, social workers, and other healthcare professionals to complement rather than duplicate efforts, enhancing coordination and resource use. However, barriers such as funding, training needs, and acceptance within the community need to be considered and addressed to facilitate effective integration. Addressing expectations An HLM could significantly enhance health outcomes by addressing critical areas such as time management, navigation, right care, and community trust. These issues have been recognized in previous papers and give researchers and policy-makers a starting point when creating practical solutions to address the inequity that exists within health and healthcare . By optimizing time utilization, HLMs may assist both individuals and organizations to focus on essential health-related tasks without unnecessary delays, thereby improving efficiency and productivity. HLMs could serve as a central resource for health-related inquiries, simplifying navigation by providing a single point of contact, which reduces the complexity often associated with accessing healthcare services . This approach could help to ensure that individuals receive the right care tailored to their specific needs, enhancing the quality of healthcare delivery . 
Furthermore, HLMs build community trust by being accessible and reliable members who understand local nuances and concerns . Setting clear expectations for HLMs is crucial in aligning stakeholders, ensuring that all parties have a shared understanding of the HLM’s role and responsibilities. This alignment fosters collaboration among individuals, communities, and organizations, which could lead to more coordinated efforts and improved health outcomes. By establishing these expectations from the outset, HLMs could effectively bridge gaps in healthcare delivery and empower communities to overcome barriers related to SDH. HLMs should play a crucial role in creating connections, advocating for individuals, and breaking down barriers within communities. By fostering effective communication, HLMs could help to enable individuals to express themselves and be heard, which is essential for building trust and empowering communities . Advocacy should be a fundamental aspect of HLMs’ duties, as they work to protect and promote the rights of individuals, ensuring that their voices are amplified in health-related discussions . This advocacy involves not only supporting individuals in navigating complex health systems but also pushing for systemic changes that address broader health inequities. Breaking down barriers requires HLMs to listen deeply and provide tailored support that respects the unique needs of each community member. Clear communication is vital in achieving these duties, as it ensures that all stakeholders are aligned and informed about the goals and processes involved . By maintaining open lines of communication, HLMs could effectively coordinate efforts across various sectors, ultimately leading to more inclusive and equitable health outcomes for all community members. Role in community HLMs could significantly expand their roles in various community settings such as state libraries, universities, not-for-profits, and other organizations by integrating with existing roles like nurses, teachers, and health navigators. Libraries serve as accessible hubs for information dissemination, making them ideal partners for HLMs to collaborate with librarians to provide HL resources tailored to a community’s needs . Not-for-profit organizations offer another avenue for HLMs to reach underserved populations by working with health connectors and coaches to deliver targeted interventions. Integrating lessons learned from similar initiatives could help define whether HLMs should be volunteers or professionals and highlight potential challenges. The experiences of health mediators in Eastern Europe, for instance, illustrate that while volunteer-based roles foster community trust, they can suffer from high turnover and inconsistent support . On the other hand, structured, professional approaches, like those in multidisciplinary models, provide stability but can be more costly and require significant investment. In schools and universities, HLMs could work alongside educators to embed HL into curricula, ensuring that students across disciplines develop essential skills for navigating health information . By utilizing the expertise of nurses and teachers who already play pivotal roles in health education, HLMs could create a more cohesive approach to improving HL . This integration could not only enhance the effectiveness of existing programmes but also ensure a comprehensive strategy that addresses the diverse needs of communities, ultimately leading to improved health outcomes and reduced disparities . 
Timing for impact The HLM role could make a significant impact through early interventions in educational settings and community initiatives. By focusing on schools and educational environments, HLMs could integrate HL into the curriculum, fostering a culture of informed health decision-making from a young age . This early engagement is crucial as it equips students with the necessary skills to navigate health information throughout their lives, thereby reducing health disparities linked to SDH . In community settings, HLMs could initiate programmes that address specific local health challenges, tailoring strategies to meet the unique needs of diverse populations. Long-term engagement in these communities is essential to build trust and ensure sustained improvements in HL. Tailored strategies that consider cultural, social, and economic factors are necessary for these interventions to be effective . By maintaining an ongoing presence and adapting approaches based on community feedback, HLMs can ensure that their efforts lead to meaningful and lasting changes in health outcomes. This proactive approach not only empowers individuals but also strengthens community resilience against health inequities. Recommendations and future research In summary, these findings suggest that policy-makers could consider incorporating HLMs into policies as a strategic approach to improving public health outcomes. However, the nature of the HLM role, whether as volunteers or professionals, must be critically examined. Integration of the role may follow a spectrum, ranging from volunteerism, which enhances the capabilities of current workers without significant cost, to dedicated paid professionals who provide consistent and expert support. This spectrum allows for flexible implementation tailored to community needs and resources, balancing cost, sustainability, and impact. While volunteer HLMs could foster trust and connection within communities due to their grassroots nature, research indicates that volunteerism comes with challenges, including limited availability, high turnover, and inconsistent training . Upskilling existing professionals to take on HLM responsibilities can enhance workforce capacity, improve continuity of care, and provide cost-effective, immediate HL support within the current system. Employing professional HLMs provides more stability and comprehensive expertise but raises concerns regarding sustainability and costs. Ensuring trust between professionals and community members would also need strategic efforts . Future initiatives could consider a hybrid model where HLMs start as trained volunteers with pathways to professional roles, which would balance trust-building with sustainability. Also, each health or community setting may require an assessment of its current resource requirements, existing skills, and capacity-building needs prior to introducing an HLM role. This sort of assessment may support the success and sustainability of such interventions. For successful implementation, it would be crucial to develop clear guidelines and training programmes that equip HLMs with the necessary skills and resources. These programmes could outline the specific competencies needed, with adaptations for either volunteer or professional tracks, ensuring that all HLMs are prepared to navigate their roles effectively. 
Drawing on the successes and challenges experienced by health mediator programmes in Eastern Europe, it is evident that best practices should be tailored to local needs. Programmes like the Roma Health Mediators Project emphasize the importance of sustainable training and support structures, which would be critical for the HLM role in Tasmania. Future research should explore the specific challenges of integrating HLMs into various community contexts, including potential resistance from existing healthcare structures and the need for sustainable funding models. It would be valuable to investigate funding strategies that could support either volunteer or paid HLMs, such as community grants, partnerships with local organizations, or government subsidies. Additionally, evaluating the long-term impact of HLM interventions on health outcomes will be essential in refining their role and maximizing their effectiveness in reducing health disparities. The future of health promotion is poised for transformative impact, emphasizing a holistic approach that integrates education, community engagement, and policy development . A realistic pathway for the evolution of the HLM role could account for resource constraints and community expectations. Informed by this research, the research team will develop a clear position description, which will include the roles and responsibilities of an HLM, to ensure practical implementation. An HLM could play a pivotal role in this evolution by addressing SDH and empowering individuals with the knowledge and skills needed to navigate complex health systems. Establishing trust and credibility within communities will be crucial, whether the HLMs are volunteer-based or part of a professional workforce. As health promotion strategies continue to evolve, they will increasingly focus on creating supportive environments and strengthening community actions . By integrating into diverse settings such as schools, workplaces, and community centres, HLMs may support early interventions and tailor strategies to meet the unique needs of different populations. This proactive approach not only addresses immediate health concerns but could also lay the groundwork for sustained improvements in public health outcomes, ultimately contributing to a healthier, more equitable society. Additionally, informed by this research, pilot programmes should be considered to assess the feasibility and impact of different HLM approaches, enabling the identification of the most effective structure for Tasmania. Strengths and limitations This study utilizes a co-design approach, which is a significant strength as it ensures that the perspectives of both users and providers are incorporated into the planning and design of the HLM role. By grounding the co-design process in local knowledge and expertise, the study was able to develop context-specific solutions that are more likely to create an HL-responsive environment tailored to the unique needs of the Tasmanian community. This participatory approach promotes stakeholder buy-in and enhances the relevance of the proposed solutions to the local population. However, there are limitations to this approach. While the co-design process generated expectations, recommendations, and ideas based on stakeholders’ personal knowledge and experiences, it does not provide empirical evidence of the effectiveness or feasibility of the HLM role. 
Consequently, further research is needed to implement and evaluate this role to determine its potential impact on health and equity outcomes within communities. Additionally, the vignettes used in this study were based on only five scales of the HLQ. This approach, while focused, might have excluded other important HL strengths and challenges, potentially affecting the comprehensiveness of the findings and any future decisions informed by this data. Furthermore, the participant group composition was limited, with only one male participant, which could introduce biases. The findings may therefore have limited generalizability, and caution should be exercised when interpreting or applying these results to other contexts. While conducting online workshops via Microsoft Teams facilitated engagement with diverse stakeholders across Tasmania, it may have inadvertently limited participation from those with restricted access to digital technology or low digital literacy. This constraint highlights a potential barrier to inclusive participation. Future research should consider employing a hybrid model that combines in-person and online engagement options to accommodate stakeholders’ preferences, thereby enhancing participation and ensuring a more comprehensive representation of perspectives and capturing different viewpoints. 
In conclusion, the HLM role represents a significant opportunity to address health inequities by enhancing time management, streamlining healthcare navigation, ensuring appropriate care delivery, and fostering community trust. By bridging gaps between individuals and healthcare systems, advocating for equity, and tailoring support to community needs, HLMs could play a transformative role in breaking down barriers to healthcare. Their integration into diverse community settings, such as libraries, schools, and universities, alongside collaborations with existing roles like nurses, teachers, and social workers, underscores their potential to amplify HL efforts. Stakeholders identified key expectations for the HLM role, including its focus on being solution-oriented, community-based, and targeted towards populations with the greatest need. Furthermore, embedding HLMs in educational and early intervention initiatives highlights the importance of long-term engagement and proactive strategies to build a health-literate population. These findings emphasize the need for a structured and sustainable implementation of the HLM role to promote equitable access to health resources and improved public health outcomes. Future research and pilot programmes will be essential to refine this role and evaluate its impact on reducing health disparities. |
The role of the KEAP1-NRF2 signaling pathway in form deprivation myopia guinea pigs | 66f4aa2b-94d9-4767-b1a4-f179afb8b62f | 11566547 | Anatomy[mh] | The etiology of myopia is complicated . Despite significant efforts by many researchers to elucidate the causes of myopia, a comprehensive understanding remains elusive. Previous studies found that oxidative stress (OS) responses and alterations in associated signaling pathways due to hypoxia may contribute to myopia, especially in high myopia . Additionally, recent research indicates that scleral hypoxia caused by reduced choroidal blood flow perfusion influences scleral remodeling by up-regulating the expression of hypoxia-inducible factor-1α (HIF-1α), thereby promoting myopic progression . Because hypoxia plays a crucial role in inducing OS, this suggests that OS might regulate myopic progression. Furthermore, the excessive accumulation of HIF-1α leads to the inactivation of prolyl hydroxylase and the subsequent stimulation of reactive oxygen species (ROS) release, so that ROS and HIF-1α jointly induce OS . These results further suggest that OS is closely connected with myopic progression, but the role of OS-related signaling pathways in myopia remains poorly understood. Superoxide dismutase (SOD) activity was decreased in the retinas of FDM guinea pigs, and antioxidant levels were significantly reduced in the aqueous humor of myopic patients . These findings suggest that OS-related reductions in antioxidant capacity might play a critical role in myopia development. The Kelch-like ECH-associated protein 1 (KEAP1) - nuclear factor erythroid 2-related factor 2 (NRF2) pathway is a critical regulator of OS, but few studies have reported on its role in myopia progression. KEAP1 binds tightly to NRF2, maintaining stable expression in the cytoplasm . Upon the occurrence of OS, tyrosine kinase rapidly facilitates the separation of KEAP1 from NRF2 in the cytoplasm, resulting in KEAP1 degradation. Subsequently, NRF2 translocates into the nucleus and accumulates, where it regulates downstream antioxidant genes, thereby exhibiting an anti-OS function . When OS severity exceeds the antioxidant capacity, tissue damage ensues. Research has shown that reduced SOD activity in the retinas of FDM guinea pigs may lead to pathological changes such as retinal thinning and structural disorganization . As a downstream target gene activated by NRF2, SOD works to alleviate OS and reduce retinal damage . These findings suggest that activating NRF2 and regulating SOD could enhance retinal antioxidant capabilities and affect myopic progression. Therefore, activating the KEAP1-NRF2 signaling pathway is a vital strategy to regulate SOD and protect the retinas from OS-induced damage. It might become a new intervention approach for myopic progression. This study aims to conduct a preliminary investigation that provides novel insights into the pathogenesis of myopia. This study constructed the FDM guinea pig model to demonstrate the influence of KEAP1, NRF2, and SOD on myopic development. Specifically, alterations in KEAP1-NRF2 signaling in myopia provide indirect evidence that OS contributes to myopia’s development. Activating NRF2 to enhance SOD expression could potentially improve retinal antioxidant capacity and decelerate myopia progression. Animal grouping This research received approval from the Animal Care and Ethics Committee at the North Sichuan Medical College ( NSMC2022036 ). 
It complied with the Association for Research in Vision and Ophthalmology statement for using animals in ophthalmic and vision research. Guinea pigs aged three weeks and weighing 120 g to 150 g were selected for the study. After examining the diopter of all guinea pigs, those with congenital myopia were excluded. A total of 45 guinea pigs were retained and randomly assigned to different groups. The blank control group without any treatment was the negative control (NC, n = 15). The experimental group had a white translucent mask covering their right eyes to induce form deprivation myopia (FDM, n = 15), and their left eyes were left without intervention as the self-control (SC, n = 15) . The intervention group was treated with tert-butylhydroquinone (TBHQ, HY-100489, USA) (TBHQ, n = 15), which was dissolved in a mixture of normal saline with 10% DMSO and delivered by intraperitoneal injection (10 mg/kg) at 48 h intervals. All guinea pigs were housed in a 12 h light/12 h dark room for four weeks; the temperature was maintained at 23 ± 2 ℃, and the lighting was 500 lx . After four weeks, all animals were humanely euthanized using excessive isoflurane (RWD Life Science Co., R510-22-10, China) inhalation followed by cervical dislocation. Subsequently, the eyeballs were removed for further experiments. Measurement spherical equivalent & axial length The spherical equivalent (SE) and axial length (AL) were examined at five time points: before the mask was applied and after one, two, three, and four weeks of treatment. The biological parameters were measured in a darkroom in the morning at 8:00 without the mask. The SE was measured at least three times using streak retinoscopy (66 Vision-Tech Co., China) by two experienced optometrists. The AL is defined as the distance from the anterior corneal center to the retina and was measured using an A-scan ultrasound device (Cinescan, France) by a proficient ophthalmic technician. The AL data were derived from the average of ten measurements for each eye. The mean values of SE and AL were calculated for analysis. Immunohistochemistry Eyeballs were taken from each group and fixed in a 4% paraformaldehyde solution. They were then dehydrated with graded alcohol and embedded in paraffin. The paraffin-embedded tissues were then sectioned into 4 μm thick slices, which were subsequently deparaffinized in dimethyl benzene and rehydrated in graded alcohol. Antigen retrieval was performed using EDTA (pH 8.0, Biosharp, China) for 15 min under heat induction. Sections were incubated in a 3% hydrogen peroxide-methanol solution at 37 °C for 15 min and rinsed thrice with PBS. Blocking was carried out with 3% goat serum for 30 min, followed by overnight incubation at 4 °C with primary antibodies (anti-KEAP1, 1:100, Proteintech, China; anti-NRF2, 1:100, Huabio, China; anti-SOD1, 1:50, Omnimabs, Canada). After incubation, sections were washed three times with PBS and incubated with secondary antibodies (Boster, China) at room temperature for 1 h. Protein immunoreactivity was detected using DAB chromogen (ZSGB-BIO, China). Sections were stained with hematoxylin, differentiated with hydrochloric alcohol, and dehydrated in graded alcohol. Finally, the sections were mounted using neutral resin and examined for protein distribution under a microscope (Leica, France). Reverse transcription quantitative polymerase chain reaction (RT-qPCR) Retinal samples were pre-prepared for RNA extraction using Trizol. 
The extraction followed the detailed steps outlined in the kit manufacturer’s instructions (Takara, Japan). The extracted RNA was reverse transcribed into cDNA and then mixed with SYBR Green II (Takara, Japan) in the wells for amplification on a real-time PCR instrument (LightCycler 480, USA). The amplification protocol was as follows: initial denaturation at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s, annealing at 60 °C for 30 s, and extension at 97 °C for 1 s. The relative mRNA levels of KEAP1, NRF2, and SOD were normalized to the internal control gene GAPDH and calculated using the 2^(-ΔΔCT) method. The sequences used in this study are shown in Table . Western blot Retinas were isolated from eyeballs and lysed in RIPA buffer (Beyotime Biotechnology, China) containing 1% PMSF (Beyotime Biotechnology, China). Protein extracts were obtained from the retinal supernatant, and their concentrations were determined using a BCA kit (Beyotime Biotechnology, China). The proteins were then mixed with 5× protein loading buffer (Solarbio, China), heated at 90 ℃ for 10 min, subjected to 10% SDS-polyacrylamide gel electrophoresis, and transferred to nitrocellulose membranes, which were then cut based on the locations of the target proteins. NRF2 is located at 100 kDa, KEAP1 at 70 kDa, SOD1 at 26 kDa, and GAPDH at 36 kDa. Next, the membranes were blocked with 5% nonfat milk at room temperature for 1 h and incubated overnight at 4 ℃ with primary antibodies (anti-KEAP1, 1:2000, Proteintech, China; anti-NRF2, 1:1000, Huabio, China; anti-SOD1, 1:1000, WanleiBio, China; and anti-GAPDH, 1:5000, Huabio, China). After incubation with a mouse anti-rabbit secondary antibody (1:10000; Boster, China) for one hour on a shaker, the membranes were developed using ECL developer solution (Biosharp, China) and imaged via a chemiluminescence system (Vilber Lourmat, France). Densitometry was quantified using ImageJ software. Statistical analyses All data are presented as mean ± standard deviation (SD) and were analyzed using SPSS software, version 28.0. An unpaired Student’s t-test was used to assess the significance between two groups, while one-way ANOVA was used to analyze multiple groups. Statistical significance was established at p < 0.05. Graphical representations were created with Prism software, version 9.0 (GraphPad, San Diego, CA, USA). 
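To make the relative-quantification and group-comparison steps described above concrete, the short Python sketch below shows one way the 2^(-ΔΔCT) calculation normalized to GAPDH, followed by an unpaired t-test between two groups, could be implemented; the Ct values, per-animal fold changes, and the helper name relative_expression are hypothetical illustrations and are not data or code from this study.

import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    # dCt = Ct(target) - Ct(GAPDH) for the sample of interest and for the
    # reference (control) sample; ddCt is their difference, and the fold
    # change is 2 raised to the power of -ddCt.
    d_ct_sample = np.mean(ct_target) - np.mean(ct_gapdh)
    d_ct_ref = np.mean(ct_target_ref) - np.mean(ct_gapdh_ref)
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Hypothetical technical-replicate Ct values for NRF2 in one FDM and one NC retina
fold_change = relative_expression(
    ct_target=[24.1, 24.3, 24.0],      # NRF2, FDM sample
    ct_gapdh=[18.2, 18.1, 18.3],       # GAPDH, FDM sample
    ct_target_ref=[25.6, 25.8, 25.7],  # NRF2, NC sample
    ct_gapdh_ref=[18.0, 18.2, 18.1],   # GAPDH, NC sample
)
print(f"NRF2 fold change (FDM vs NC): {fold_change:.2f}")

# Unpaired t-test between two groups of per-animal fold changes, with
# significance taken at p < 0.05 as in the statistical analyses above
fdm_group = [2.4, 2.9, 3.1, 2.2, 2.7]   # hypothetical values
nc_group = [1.0, 1.2, 0.9, 1.1, 0.8]    # hypothetical values
t_stat, p_value = stats.ttest_ind(fdm_group, nc_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")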
Changes in SE and AL of guinea pigs in each group The baseline SE among all groups was moderate hyperopia with no statistically significant differences. By the second week, the SE differences among the groups had become statistically significant. After four weeks of treatment, the SE in both the NC and SC groups continued to show hyperopia, whereas it shifted to myopia in the FDM and TBHQ groups. However, the degree of myopia in the TBHQ group was lower than in the FDM group. Initially, there were no significant differences in AL among the groups. By the second week, significant differences in AL emerged. Following treatment, the AL in both the FDM and TBHQ groups was longer than in the NC and SC groups, with the TBHQ group exhibiting a shorter AL than the FDM group (Table ; Fig. ). Protein localization and expression in each group of guinea pigs After four weeks of FD, this study investigated the distribution of KEAP1, NRF2, and SOD1 to determine which parts of the retina were affected by these genes. These proteins are predominantly localized in the retinal ganglion cells (RGC) (Fig. ). The mRNA expression in the retinas of guinea pigs The results indicated that KEAP1 mRNA expression was elevated in both the NC and SC groups but significantly decreased in the FDM group. Conversely, NRF2 expression was significantly higher in the FDM group compared to the lower levels observed in the NC and SC groups. Similarly, SOD expression decreased in the FDM group (Fig. A). This study investigated the impact of the activator on mRNA expression by comparing the NC, FDM, and TBHQ groups. A progressive decrease in KEAP1 mRNA expression was noted across the NC, FDM, and TBHQ groups. In contrast, NRF2 expression gradually increased among these groups. SOD expression in the TBHQ group was lower than in the NC group but higher than in the FDM group (Fig. B). The protein expression in the retinas of guinea pigs After comparing the mRNA expression levels of KEAP1, NRF2, and SOD1, the protein levels of these genes were subsequently analyzed. 
The results revealed that the expression of KEAP1 and SOD1 was significantly elevated in both the NC and SC groups compared to the FDM group. Conversely, NRF2 expression was significantly higher in the FDM group than in the NC and SC groups (Fig. A-B). This study assessed the differential effects among the NC, FDM, and TBHQ groups to elucidate the activator’s impact on protein expression. KEAP1 expression was higher in the NC group than in the FDM and TBHQ groups. In contrast, NRF2 expression was lower in the NC group but was upregulated in the FDM group and significantly enhanced following TBHQ treatment. SOD1 showed higher expression in the NC group, which decreased in the FDM group and increased following TBHQ treatment (Fig. C-D). Recent studies have demonstrated that hypoxia-induced upregulation of HIF-1α enhances scleral agonist protein expression and promotes differentiation of scleral fibroblasts, ultimately contributing to myopia formation. 
Additionally, hypoxia increases matrix metalloproteinase levels by stimulating the secretion of scleral inflammatory factors, leading to significant degradation of collagen fibers, decreased scleral rigidity, and accelerated myopic progression . These findings underscore hypoxia as a pivotal factor in myopic development. Hypoxia results from an imbalance in oxygen metabolism; disruption in the balance between oxygen supply and consumption could induce OS by increasing the release of ROS and free radicals and by regulating HIF-1α transcription through the inhibition of prolyl hydroxylase, thus promoting excessive accumulation of HIF-1α and subsequent damage . Given that both HIF-1α and ROS are influenced by OS, these findings suggest that OS contributes to myopia. A recent study has found significant increases in ROS in the retinas of FDM animals , further confirming that OS is an important contributor to myopia. Therefore, understanding these pathways in myopia could reveal additional targets for future research aimed at preventing or delaying myopia, thereby enhancing the effectiveness of myopia prevention and control strategies. As a crucial signaling pathway regulating OS, the KEAP1-NRF2 pathway could enhance the antioxidant capacity of the retina. It mitigates damage caused by OS, primarily through increasing SOD activity and the expression of other antioxidative genes . SOD, a crucial antioxidant, eliminates excess ROS and free radicals, thereby reducing apoptosis . Additionally, decreased SOD activity has been implicated in the development of myopia . Previous studies have identified the primary electrophilic structures of SOD1 as copper and zinc ions, while SOD2 involves a manganese ion. They coexist in the retina and regulate OS, but deficiency in SOD1 is a major factor that reduces retinal antioxidant capacity and leads to structural damage . Thus, this study examined the distribution of KEAP1, NRF2, and SOD1, with IHC revealing that they are predominantly located in the retinal ganglion cell (RGC) layer of guinea pig retinas. RGCs, as vital photoreceptors of the eye, influence myopic progression through regulation of the ON and OFF signaling pathways . Other studies found that ipRGCs participate in myopia development by regulating melanopsin , indicating that different types of RGCs could affect myopic progression. Moreover, RGCs have high oxygen consumption, are rich in polyunsaturated fatty acids, and have a high mitochondrial density, making them extremely sensitive to changes in oxygen metabolism. Dysregulation of the oxygen supply predisposes them to lipid peroxidation, causing an OS response and resulting in retinal damage . These findings suggest that antioxidant genes in RGCs play a crucial role in myopia regulation. This study analyzed the changes of KEAP1 and NRF2 in the retinas of myopic animals, observing decreased KEAP1 expression and increased NRF2 expression in the FDM group. Due to the small sample size, this study did not directly detect changes in ROS content and therefore provides only indirect evidence of OS changes. This is because, under normal conditions, KEAP1 expression is high and NRF2 expression is low, with NRF2 even being degraded in the cytoplasm. OS accelerates KEAP1-NRF2 dissociation and induces KEAP1 degradation in the cytoplasm, after which NRF2 expression increases and NRF2 translocates to the nucleus, where it activates antioxidant genes to combat OS . 
Therefore, this study conducted a preliminary exploration of its role in myopia, and the results indirectly showed that OS might be one of the mechanisms regulating myopia development and that KEAP1-NRF2 participated in the progression of myopia. While previous studies confirmed that increased ROS levels due to factors such as hypoxia could accelerate KEAP1-NRF2 dissociation and enhance NRF2’s regulation of downstream elements like SOD, HO-1, TGF, and HIF-1α, which is involved in myopia progression , few studies have examined the KEAP1-NRF2 signaling pathway in myopia. Therefore, further studies elucidating the role of KEAP1-NRF2 signaling in myopia and its interactions with other known pathways are essential for a comprehensive understanding. Another study found that retinal SOD activity was decreased, together with damaged retinal structure, in myopic animals . As SOD1 lies downstream of KEAP1-NRF2, OS induced by hypoxia leads to SOD1 deficiency and attenuates retinal antioxidant capacity; this results in irreversible oxidative modifications to retinal proteins and lipids, subsequently impairing visual function by disrupting retinal structures, a change particularly notable in high myopia . The results showed that SOD1 decreased in the retinas of the FDM group, further suggesting that the decreased retinal antioxidant capacity is related to myopia development. Still, its dynamic changes during form-deprivation myopia need more in-depth observation. TBHQ is an exogenous antioxidant compound that specifically activates NRF2 and promotes the expression of its downstream antioxidant gene SOD, thereby enhancing tissue antioxidant capacity . Treatment with TBHQ resulted in more significant changes in SE and AL compared to the NC group, but less so than in the FDM group. Molecular experiments demonstrated that TBHQ upregulated retinal NRF2 in FDM guinea pigs and mitigated myopia progression by increasing SOD1 expression. These results confirm that activating NRF2 and enhancing SOD1 expression can bolster retinal antioxidant capability and decelerate experimental myopic progression. Nonetheless, other second-phase antioxidant genes, such as HO-1 and NQO-1, are also regulated by the KEAP1-NRF2 signaling pathway, indicating that the antioxidant properties of SOD1 are not unique . Moreover, earlier studies indicate that inflammatory mediators like NF-κB, TNF-α, and IL-6, which are regulated via the KEAP1-NRF2 pathway, contribute to myopia progression . This evidence suggests that the KEAP1-NRF2 mechanism is complex, and additional mechanisms still require exploration. Future research should isolate tissues from myopia models for proteomic analysis and further investigate potential interactions among several key pathways through protein interaction analysis and the Kyoto Encyclopedia of Genes and Genomes. In conclusion, this study is the first to identify changes in KEAP1-NRF2 in FDM. Activation of KEAP1-NRF2 could promote downstream SOD1 expression and enhance retinal antioxidant capacity, offering a novel method to inhibit the progression of myopia. This study examined the distribution and expression of KEAP1, NRF2, and SOD1 in the retina. The results showed that they were distributed in RGCs. The levels of KEAP1 and SOD1 were reduced in the FDM group compared to the NC and SC groups, while NRF2 expression was increased. Treatment with TBHQ upregulated NRF2, which subsequently elevated SOD1 expression and mitigated myopia progression. 
Overall, this study demonstrated the involvement of the KEAP1-NRF2 pathway in form-deprivation myopia progression, suggesting that the underlying mechanisms warrant further investigation. |
Rs205764 and rs547311 in linc00513 may influence treatment responses in multiple sclerosis patients: A pharmacogenomics Egyptian study | 8250bf15-b406-49d8-a2b6-11c42d19875c | 9985893 | Pharmacology[mh] | Introduction Multiple sclerosis (MS) is a disorder of the central nervous system (CNS), causing neurological disabilities in young adults. This complex and multifactorial disease affects more than 2.5 million people globally, with a higher prevalence in females compared to males . Establishing prevalence and estimates of MS in developing countries is yet to be made more feasible, primarily due to the lack of epidemiological studies around this disease. Treatmenst options of MS are aimed at 3 disciplines; the management of acute relapses, symptomatic treatment, and disease modifying treatment (DMT) . DMTs are drugs that are aimed at modulating immune responses. The primary goal of using DMTs is controlling and integrating clinical parameters such as relapses or disease progression, and magnetic resonance imaging (MRI) parameters such as the presence of new lesions. Together, both parameters are combined in a term called no evidence of disease activity (NEDA) . Despite the availability of well-established evidence on the clinical efficacy of these drugs, inconsistent treatment responses still prevail, providing a frequently insurmountable barrier against achieving adequate clinical outcomes and providing a better quality of life for these patients. Personalized therapies for MS are recently gaining a rightful interest, where the integration of parameters beyond MRI scans and disease state has a potential for contributing to better and more efficient treatment choices. Such parameters include accounting for differential epigenetic profiles in patients vs. healthy subjects, an emerging and promising area of research , as well as possibly accounting for genetic variances, or single nucleotide polymorphisms (SNPs), whose downstream effects may ultimately translate into affecting the response to treatment in patients who were typically suited for that given treatment . SNPs accounting for such discrepancies are not uncommon in MS. While accounting for these SNPs would certainly be pivotal in influencing the choice of DMT, a gap would still remain, since all SNPs previously associated with treatment responses were on protein coding elements . Indeed, the insurmountable epigenetic component of MS calls for the imminent bridging between the inconsistent treatment responses and SNPs on both coding and non-coding genetic elements, integrating both the epigenetic component of the disease as well as potential implications of genetic variations. The role of long non-coding RNAs (lncRNAs) is recently emerging in MS, owing to the high regulatory capacity of these elements in the disease pathogenesis . LncRNAs are non-coding species exceeding 200 nucleotides in length, and they can influence the differentiation of oligodendrocytes and the polarization state of macrophages, act as micro-RNA (miRNA) sponges, regulate the levels of immune-modulatory cytokines, as well as influence the activation state of CD4+ cells. It, therefore, comes as no surprise that SNPs occurring on such elements are expected to play important roles in the downstream activity of a given lncRNA, potentially extending to alterations in treatment responses among different patients. Long intergenic non-coding RNA (linc)00513 has been recently reported as a novel regulator of the type 1 interferon (IFN) signaling pathway . 
Polymorphisms in the promoter region of linc00513 (G for rs205764 and A for rs547311) have also been associated with an overexpression of linc00513 and a subsequent increase in the downstream signaling activity of the type 1 IFN pathway . In MS, no such variants have yet been investigated, and a corresponding role of linc00513 remains elusive. Given the pivotal role that the type 1 IFN signaling pathway plays in MS , investigating the implications of these genetic variations in MS patients seemed of great interest. We therefore aim to provide data on the distribution of genotypes at rs205764 and rs547311 in MS patients of the Egyptian population, and correlate these genotypes with the response to treatment. Other clinical parameters are also included, further asserting the clinical ramifications of these SNPs. Materials and methods 2.1 Study group This study included 144 relapsing-remitting MS (RRMS) patients (115 females and 29 males) with a clinical diagnosis of MS. Clinical parameters of the patients were assessed by the same neurologist at Nasser Institute Hospital MS Unit, Cairo, Egypt. Information was obtained regarding the patients' response to treatment, which was defined as the lack of clinically documented attacks for at least one year on treatment . Additional information on the age of onset, the Expanded Disability Status Scale (EDSS) score, and the annualized relapse rate (ARR), a parameter reflecting the number of relapses per year, was also obtained. All patients included were older than 18 years of age, diagnosed at the same MS center, and on a given medication for one year at the time of the study. Alternatively, their medical records were retrospectively checked, when applicable, for their status at one year of treatment in order to eliminate the effect of treatment duration on response. The age of onset was defined from the time of symptom onset, not from the time of diagnosis, and the EDSS and ARR were assessed and calculated by the neurologist. First, with regard to the response to treatment, patients were considered responsive to a given medication if they experienced no relapses within the first year of treatment initiation. Alternatively, relapses occurring within the first few months of treatment were considered a positive predictor of treatment inefficacy in these particular patients , and they were therefore considered non-responsive to the given medication. All 144 RRMS patients were initially compared for differences in the frequency of responders among the genotype groups to highlight a potential genotype-treatment response association. In subsequent subgrouping based on the treatment received, n = 48 patients receiving fingolimod, and n = 19 patients receiving dimethyl fumarate (DMF), were analyzed for the frequency of responders among the different genotype groups ; analysis of the response to treatment was done after one year of treatment initiation. For the EDSS, the scores of n = 108 (for rs205764) and n = 110 (for rs547311) RRMS patients were compared with regard to their genotypes to highlight a potential genotype-EDSS association . All study participants signed an informed consent form, and this study was approved by the ethics committees at the German University in Cairo and Nasser Institute Hospital, Cairo, Egypt. 2.2 Molecular research methodology 2.2.1 Genomic DNA isolation Genomic DNA was isolated from patients' whole blood using the QIAamp DNA extraction kit (Qiagen, USA) according to the manufacturer's protocol. DNA samples were stored at -20 °C until downstream processing. DNA was quantified using a NanoDrop spectrophotometer.
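Purely as an illustrative aid and not part of the study protocol, the quantification step above can be connected to the genotyping reactions described next by converting a spectrophotometer reading into the template volume required per reaction; the 20 ng input target and the purity thresholds used below are assumptions made for this sketch.

```python
# Hypothetical helper, not the authors' code: estimate how much genomic DNA to
# pipette per genotyping reaction from a NanoDrop-style reading.

def template_volume_ul(concentration_ng_per_ul: float, target_ng: float = 20.0) -> float:
    """Volume of template (in µL) needed to deliver the target DNA mass."""
    return target_ng / concentration_ng_per_ul

def purity_acceptable(a260: float, a280: float) -> bool:
    """Rough purity screen: an A260/A280 ratio near 1.8 is typical for clean genomic DNA."""
    ratio = a260 / a280
    return 1.7 <= ratio <= 2.0

# Example: a sample measured at 35 ng/µL with A260 = 0.70 and A280 = 0.38
print(round(template_volume_ul(35.0), 2))   # 0.57 µL to reach 20 ng
print(purity_acceptable(0.70, 0.38))        # True (ratio is about 1.84)
```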
2.2.2 Identification of the polymorphisms of interest Genotyping experiments were performed on a StepOne real-time quantitative PCR (RT-qPCR) system (Applied Biosystems, USA), using TaqMan reagents: TaqMan Genotyping Mastermix and TaqMan SNP Assays with their corresponding unique Assay IDs (rs205764: C:7614549_10; rs547311: C:2595518_10) (Life Technologies, USA). The fluorescence signals detected were VIC and FAM. Preparation of the reaction mixture: Each PCR tube contained a volume of DNA equivalent to at least 20 ng (manufacturer's recommendation), nuclease-free water to 11.25 µL, 12.5 µL TaqMan Genotyping Mastermix, and finally, 1.25 µL TaqMan SNP Assay. The standard thermal profile was used . 2.3 Statistical analysis Statistical analysis was performed on GraphPad Prism v9.4 using parametric/nonparametric t-tests/one-way ANOVA when comparing the age of onset, EDSS, and the ARR, and using Fisher's exact test when comparing the response to treatment. A p-value < 0.05 was considered statistically significant. Values on the graphs are expressed as mean ± SEM. In order to determine the exact genotype that correlates with a significant difference in a given clinical parameter (inheritance of one vs two minor alleles), genotypes at the two polymorphisms were analyzed and compared with regard to four different models of inheritance. Initially, all samples were compared with regard to the dominant model of inheritance, where patients carrying the homozygous major genotype were compared to the rest of the patients. If significant differences were found in this model, additional models of inheritance were subsequently examined in order to identify whether this difference was due to the inheritance of one or two minor alleles. In the recessive model of inheritance, patients who were homozygous for the minor allele were compared to the rest of the patients. In the overdominant model, patients who were heterozygous were compared to the rest of the patients. Finally, in the codominant model, all three genotypes were compared to each other .
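To make the four inheritance models concrete, the minimal sketch below shows how genotypes could be collapsed into binary groups and tested against treatment response with Fisher's exact test. This is our own illustration in Python rather than the authors' GraphPad Prism workflow, and the counts in the example table are placeholders, not study data.

```python
# Illustrative only: genotype recoding under the inheritance models and a
# Fisher's exact test on a hypothetical 2x2 responder table.
from scipy.stats import fisher_exact

def recode(genotype: str, minor_allele: str, model: str) -> int:
    """Collapse a genotype (e.g. 'TG') into a binary group under a given model."""
    n_minor = genotype.count(minor_allele)
    if model == "dominant":       # carriers of >=1 minor allele vs homozygous major
        return int(n_minor >= 1)
    if model == "recessive":      # homozygous minor vs the rest
        return int(n_minor == 2)
    if model == "overdominant":   # heterozygous vs both homozygous groups
        return int(n_minor == 1)
    raise ValueError("the codominant model keeps all three genotypes separate")

# Hypothetical dominant-model table at rs205764:
# rows = G carriers vs TT, columns = responders vs non-responders
dominant_table = [[20, 5],
                  [15, 8]]
odds_ratio, p_value = fisher_exact(dominant_table)
print(f"dominant model: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```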
Results 3.1 Patient characteristics This study group consisted of 79.7% females (n=115) and 20.2% males (n=29). The EDSS and the age of onset were not gender-dependent (p>0.05). Patient characteristics are summarized in . 3.2 Genotyping results The genotype distributions for the two polymorphisms were as follows: For rs205764, 70 patients were homozygous for the allele T (48.6%), 12 were homozygous for the allele G (8.3%), and 62 were heterozygous (43%). For the investigated subset of the MS population, T was considered the major allele and G was considered the minor allele according to the genotyping results. For rs547311, 76 patients were homozygous for the allele G (52.7%), 15 were homozygous for the allele A (10.4%), and 52 were heterozygous (36.11%). For the investigated subset of the MS population, G was considered the major allele and A was considered the minor allele according to the genotyping results. Genotyping results and classification are summarized in . 3.3 Analyzing the response to treatment in different genotype groups for rs205764 and rs547311 The response to treatment was defined as the lack of clinically documented attacks for at least one year on treatment , as previously mentioned. No significant differences were found in the response to treatment in general between patients carrying polymorphisms at either location and those who did not, either compared as a whole or sub-grouped by gender. When comparing the response of patients to specific DMTs , patients carrying polymorphisms at rs205764 in the dominant model (either one or two G alleles) showed a significantly higher response to fingolimod (p = 0.0362*) with an odds ratio (OR) of 4.72 compared to patients carrying two T alleles . These patients also showed a significantly lower response to DMF (p = 0.0436*) with a relative risk of 0.5 . Upon comparing these patients based on other models of inheritance, starting with the recessive, followed by the overdominant and the codominant, no significant difference was observed, suggesting that the difference in responses to fingolimod or DMF could equally be attributed to inheritance of either one or two G alleles at rs205764. Patients carrying polymorphisms at rs547311 showed no statistically significant differences in their response to fingolimod (p = 0.103) or DMF (p > 0.999). 3.4 Analyzing other clinical parameters in different genotype groups for rs205764 and rs547311 3.4.1 EDSS Patients' EDSS scores were assessed by the same consulting neurologist at Nasser Institute Hospital. When comparing the average EDSS of patients carrying different alleles at both positions , patients carrying one or two A alleles at rs547311 showed a significantly higher EDSS (p = 0.0419*) compared to patients carrying two G alleles.
Upon comparing these patients based on other models of inheritance, no significant difference was observed, suggesting, again, that the inheritance of one or two A alleles at rs547311 may be equally detrimental for a patient's EDSS. Different alleles at rs205764, on the other hand, showed no significant association with the patients' EDSS . 3.4.2 Age of onset The patients' age of onset was defined from the reported time of onset of symptoms and not the time of diagnosis. The average age of onset of the different patient genotype groups was compared for the two polymorphic locations . When comparing the average age of onset between patients carrying one or two G alleles at rs205764, no significant difference was observed (p = 0.7098). This was also the case when comparing patients carrying polymorphisms at rs205764 only (i.e. carrying the major G allele at rs547311) (p = 0.8934). The opposite was also true for rs547311. Additionally, when comparing the average age of onset for patients carrying a polymorphism at either location exclusively without the other , a trend could be seen, yet the difference did not reach significance (p = 0.3683). These results are summarized in . 3.4.3 ARR The patients' ARR was calculated by the neurologist and compared across the different genotypes in the dominant model with regard to the two polymorphisms. Although non-responders, by definition, experience more relapses than responders and should be expected to have a higher ARR, no significant difference in the ARR between genotypes at either location was found . This is likely attributed to the fact that, with the exception of fingolimod and DMF, there were no significant differences among the different genotypes with regard to the response to MS treatment in this study. However, upon comparing the ARR between genotypes within a given treatment (for both fingolimod and DMF – ), the lack of significant differences persists, underscoring the usefulness of assessing treatment responses through more than one analysis in this study. Moreover, the effect of patient genotype on ARR may be better assessed through measuring differential changes in ARR before and after treatment for each genotype. 3.5 Correlation between age and the analyzed parameters In order to ascertain that the analyzed parameters are not influenced by age in our studied patient cohort, a correlation was done between age and each of the EDSS, response to fingolimod, response to DMF, as well as the ARR. Correlations between age and each of the EDSS and ARR were done using Pearson correlation, and with the response to treatment using the point-biserial correlation test. None of the correlations with age appeared to be strong or significant, suggesting that in our cohort, age did not influence any of these parameters. These results are summarized in .
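For illustration only (the correlations above were computed in GraphPad Prism rather than in code), the age checks could be reproduced along the following lines; the arrays are invented placeholder values, not patient data.

```python
# Sketch of the age correlations: Pearson for continuous outcomes (EDSS, ARR)
# and point-biserial for the binary response-to-treatment variable.
import numpy as np
from scipy.stats import pearsonr, pointbiserialr

age = np.array([24, 31, 28, 40, 35, 29])          # placeholder ages
edss = np.array([1.5, 2.0, 3.5, 2.5, 4.0, 1.0])   # placeholder EDSS scores
responder = np.array([1, 0, 1, 1, 0, 1])          # 1 = responder, 0 = non-responder

r_edss, p_edss = pearsonr(age, edss)
r_resp, p_resp = pointbiserialr(responder, age)
print(f"age vs EDSS: r = {r_edss:.2f} (p = {p_edss:.3f})")
print(f"age vs response: r_pb = {r_resp:.2f} (p = {p_resp:.3f})")
```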
Discussion Multiple sclerosis is a complex, multifactorial, immune-mediated disease targeting the CNS, causing focal lesions of demyelination, impairing nerve conduction and signal transmission . Treatment strategies for the disease are generally aimed at 3 directions; of particular controversy and importance is the use of drugs that help modulate immune responses, called DMTs, a few examples of which are IFN-β, fingolimod, glatiramer acetate, and dimethyl fumarate . Epigenetic research has garnered rightful interest in its contribution to understanding disease pathology , susceptibility, and development . Several areas of research have recently taken interest in the roles of lncRNAs in immune-mediated diseases in general, and MS in particular, in light of the pre-established epigenetic changes that are observed in the disease pathology . LncRNAs have numerous well-established genetic and epigenetic regulatory roles. Of particular interest in this framework is linc00513, since its dysregulation has so far been investigated in only a single study, conducted on systemic lupus erythematosus (SLE) patients, and nothing is yet known about its functional role in MS. Its overexpression has been shown to positively relate to the activity of the type-1 IFN signaling pathway, contributing to the inflammatory state in SLE patients . Linc00513 has been identified as a risk allele for SLE in the aforementioned study, yet no such correlation has been made with MS as per the most recent MS genetic map . However, due to the previously well-established protective role of the same signaling pathway in MS, drawing a straightforward prediction regarding the population under investigation was not entirely possible, making it all the more intriguing to investigate its correlation with MS. Single nucleotide polymorphisms (SNPs) are genetic variations involving a single base-pair. Ample research is available on SNPs involved in the development of MS; however, very little research has yet examined SNPs located on non-coding genetic elements, and the potential influence this may have on downstream regulatory processes, and ultimately the clinical picture of the patients. Regarding linc00513, a study has shown that the G allele at rs205764 and the A allele at rs547311, located in its promoter region, positively correlate with its expression levels and the subsequent signaling activity of the type-1 IFN pathway . Building on this, the aim of this work was to determine the genetic prevalence of rs205764 and rs547311 in MS patients of the Egyptian population, and correlate these genetic variants with several clinical parameters, the primary focus of which was the response to treatment. Blood samples were collected from 144 patients, from which genomic DNA was isolated and analyzed for the genotypes at the positions of interest on linc00513 using RT-qPCR.
These polymorphisms were then correlated with the previously obtained clinical parameters of each patient: the response to treatment, onset age, and EDSS. The genotypes were analyzed and compared with regard to 4 different models of inheritance: dominant, recessive, overdominant, and codominant. When analyzing the relationship between these polymorphisms and the patients' response to treatment, a significant difference was found for patients carrying polymorphisms at rs205764, who showed a significantly higher response to fingolimod compared to patients carrying the major allele, with an OR of 4.7. These patients also showed a significantly lower response to DMF, with an OR of 0.5. When examining additional models of inheritance, no significant differences were found, suggesting that inheritance of either one or two G alleles is equally associated with a difference in the treatment response. For the same variants, there are no reported associations, to date, with the response to treatment in MS or any other autoimmune disease. However, other variants have been studied in the context of response to DMF and fingolimod. Rs6919626, in the NADPH oxidase-3 gene, has been significantly associated with a lower response to DMF, but no significant associations have been found with the response to fingolimod yet . For the two remaining clinical parameters, no significant difference was seen in the average age of onset between patients carrying either polymorphism and those who did not. These variants have also not previously been associated with the age of onset of MS or any other autoimmune diseases; however, rs10492503 in the Glypican-5 gene has previously been significantly associated with an earlier age of onset in male MS patients . In our study, patients carrying polymorphisms at rs547311 showed a significantly higher disability score compared to patients carrying the major allele. No significant differences were seen in the other models of inheritance, suggesting that one or two A alleles are equally detrimental for a patient's EDSS. Finally, polymorphisms at rs205764 appear to have no association with the EDSS. These findings appear to be partially consistent, in terms of patient disability, with the study reporting rs205764 and rs547311 as novel regulators of IFN signaling , where the resulting overexpression of linc00513 has been associated with a higher IFN score for SLE patients. Moreover, several other variants have previously been associated with differences in EDSS for MS patients, including rs17445836 in the interferon regulatory factor-8 gene , rs3087456 and rs4774 in the class-II trans-activator gene , rs1049269 in the transferrin gene , and rs1494555 in the interleukin-7 receptor gene . Through this work, we intended to assert the relevance of genetic polymorphisms in the clinical course of a complex disease like MS.
However, some limitations of this study ought to be acknowledged, including the small number of patients in some of the comparisons and the lack of available data for certain clinical parameters. The missing MRI data hindered closer monitoring of the clinical course of the disease and prevented sub-clinical disease activity from being accounted for, while the lack of patient ARR values before treatment initiation meant that differential treatment efficacies among the genotypes could not be assessed from a relapse-incidence perspective, which would have further corroborated the significant differences between the numbers of responders and non-responders found in some of the treatment groups. The allocation of the correct patients to the correct treatment regimens is the ultimate goal in the context of any healthcare specialization. The development of tools, however preliminary, that aid in accomplishing this goal should be regarded as an utmost priority. Establishing reliable biomarkers or screening methods for treatment stratification of MS patients is the first stepping stone towards achieving truly personalized MS therapy. This could potentially be achieved through exploring the possibility of constructing a gene panel consisting of all SNPs that are implicated in the inconsistent treatment responses among MS patients, and potentially using it as a guide to direct physicians towards more effective treatment choices, maximizing patient benefits and minimizing the exposure to unnecessary therapies, and possibly untying one of the knots contributing to the complexity of this multifactorial disease. The original contributions presented in the study are included in the article/supplementary materials. Further inquiries can be directed to the corresponding author. The studies involving human participants were reviewed and approved by the German University in Cairo ethics committee and the Nasser Institute Hospital ethics committee. The patients/participants provided their written informed consent to participate in this study. HE designed the research framework and methodology. NA carried out sample collection, DNA isolation and genotyping, statistical analysis, and manuscript writing. ME-A contributed to sample collection and DNA isolation. RR and MH were the neurologists who provided crucial clinical data, including the EDSS among other parameters, and also assisted in the ethical approval of this study. All authors contributed to the article and approved the submitted version.
A Model of Social Media Effects in Public Health Communication Campaigns: Systematic Review
This dialogue and amplification may be generated in a more effective manner than in “old” media channels, or at least in a more measurable manner . Further, there are risks that come with social media use that may result in undesirable or even harmful effects, such as the spreading of misinformation and facilitation of stigma and abuse . The traditional HOE model does not account for these interactions and effects. The development of social media campaign practice has outpaced that of theoretical and conceptual development. Communications theories do exist and can be useful in understanding how messages are disseminated and why people engage with social media. The One-step, Two-step, and Multi-Step Flow theories, for example, can be useful in conceptualizing the relationship between social media users and mass media , while the Uses and Gratification Theory helps in understanding why social media users seek out the information that they do . Behavior change theories, like the Health Belief Model and Diffusion of Innovations , are also useful in identifying the factors that influence behavior change that campaign messages can target. However, communications and behavior change theories are not sufficient when it comes to understanding the effects of social media campaigns and the assumptions that underpin campaigns. The HOE model may be able to fill this gap but given the vastly different nature of social media compared to traditional media channels, it must be reconsidered and perhaps updated or even replaced entirely. Key questions relating to campaign practice remain unanswered, especially in relation to the impact on health-related outcomes. This includes an assessment of the value of engagement in achieving campaign goals. In commercial marketing, there is limited evidence that engagement is associated with increased purchase intentions, income, and sales , but within health communication, it remains unclear as to what role engagement plays, if any, or whether different types of engagement are better than others and in what contexts. As mentioned above, audience engagement with a social media campaign has been framed as desirable , but it remains unclear what its value is. This makes it difficult for campaigners and evaluators to develop and evaluate campaigns. Amidst calls for new methods and research into digital communications , we need to consider the theory of how social media “works” in health communication campaigns, including the position and role of engagement. Failure to do so risks wasting resources and, worse, campaigns having a negative impact on health outcomes. In this systematic review, we aimed to update the traditional HOE for health communication campaigns in the context of social media. Specifically, we asked: What indicators are used to evaluate the effectiveness of health-related social media campaigns? How are these indicators conceptualized to lead to health-related outcomes? We were not seeking to quantify the magnitude of effects of social media on health-related outcomes or test the pathway. Instead, we hope to inform further practice-relevant research and evaluations of the use of social media in health communication. Overview We undertook a systematic review of studies following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (see ) reporting on the use of social media for health communication purposes (PROSPERO Registration: CRD42021287257). 
We searched 5 electronic databases (CINAHL, Scopus, MEDLINE, PsycInfo, and Web of Science) from 2007 until November 22, 2022. These databases were selected because they are the predominant databases for health-related research. The search strategies are shown in . Eligibility and Screening To be eligible, studies needed to be descriptions or evaluations of public health marketing campaigns that used any social media, including as part of a wider mass media or social marketing campaign, and published from 2007 onward. This year was selected because it was the first complete year in which Facebook was available to the global public. Social media metrics had to be reported separately from any other channels. "Social media" included any digital platform that enables or facilitates the creation of web-based communities for the purpose of sharing information, opinions, and content (eg, Facebook, Twitter, Instagram, WeChat, and YouTube), including purpose-built platforms. A "campaign" was defined as any sustained, deliberate effort to communicate a message or group of related messages that aim to inform, motivate, or persuade a nonclinical population. This included one-off or repeat campaigns that are continuous or episodic. We used English search terms but did not restrict eligibility by language. There were no restrictions on health issues, study design, or evaluation indicators. Studies were excluded if they were commentaries, dissertations, or conference abstracts. Reviews were also excluded because they report on multiple campaigns collectively, potentially obscuring different conceptual pathways of effects. We also excluded studies of exposure to commercial marketing on social media, clinical interventions or health programs that used social media as a setting or delivery mechanism, campaigns that targeted nonhealth issues, studies of user experience on social media, and papers that reported only formative research for social media campaigns. After removing duplicates, 1 author screened abstracts and titles for eligibility. Two authors then independently screened the full text for the retained studies, with discrepancies resolved by discussion. The agreement between reviewers was 74% (Cohen kappa coefficient=0.42). Data Extraction We developed and pilot-tested data extraction forms, extracting all information described in . This included key campaign information, such as goals and objectives, social media platforms used, and theories and frameworks used. Goals and objectives were classified as targeting either awareness-raising, individual behavior change, or social change. Awareness-raising campaigns sought to increase awareness of a health issue (eg, mental health stigma) without explicitly aiming to change behavior. Individual behavior change campaigns aimed to change health-related behaviors of individuals in a population (eg, reducing alcohol consumption), while social change campaigns aimed to build support for wider social change (eg, adoption of supportive breastfeeding policies in workplaces). Similarly, we classified theories as individual-level, interpersonal-level, and community-level. Individual-level theories conceptualized behavior within an individual (eg, health belief model and the transtheoretical model of behavior change). Interpersonal-level theories conceptualized how social factors, such as social norms and interactions, interacted with individual-level factors and influenced the behavior of individuals (eg, dynamic transactional model and social cognitive theory).
Community-level theories conceptualized how information, norms, and behaviors were transferred across and through groups (eg, diffusion of innovations and Two-step flow model of communication). Bespoke models, such as campaign-specific logic models, and frameworks for practice, such as social marketing, were also noted, but we did not classify these. Campaigns could have objectives or use theories that fit into more than one of our categories. We extracted the reported measures or indicators of campaign performance or effectiveness, grouping them according to a conceptual framework based on the HOE that was developed by Chan et al . This framework included 6 steps, shown in , that were used to group measures or indicators collected in the evaluations. Data were tabulated by campaign to facilitate comparison between campaigns, as opposed to studies. Two authors completed the extraction process. Data from a subset (10%) of the included studies were extracted by both authors to test for shared understanding of the data fields, with discrepancies resolved through discussion and the extraction forms amended as appropriate. Data from the remaining studies were extracted independently. Analysis and Development of Conceptual Pathway of Effects We analyzed the extracted data narratively. Specifically, we considered the nature of the campaign goals, objectives, theories, and frameworks (where used), and the evaluation indicators collected and reported and the relationship between these. This included examining who and what the campaigns targeted (eg, individual behavior change and social change) and whether the reported measures aligned with the stated goal. Similarly, we compared the reported measures with the concepts of the theories and frameworks that underpinned the campaigns (whether a campaign that used the health belief model collected measures of perceived susceptibility, self-efficacy, etc). This analysis was used to develop an initial conceptual model of social media effects by mapping the constructs that underpinned the campaigns, whether these were made explicit or implied. This initial model was reviewed as a team and built iteratively through discussion, with reference back to the extracted data as needed.
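As a rough sketch of how this grouping can be operationalized, the mapping below assigns example indicators to the 6 framework steps and counts how many steps a campaign's evaluation covers; the indicator-to-step dictionary is our illustrative assumption, not the review's actual codebook.

```python
# Illustrative classification of reported indicators into the 6 hierarchy steps.
STEP_OF_INDICATOR = {
    "impressions": "process", "reach": "process", "post count": "process",
    "campaign recall": "awareness",
    "likes": "engagement", "shares": "engagement", "comments": "engagement",
    "attitude change": "priming steps", "knowledge change": "priming steps",
    "quitline calls": "behavioral trialing",
    "smoking prevalence": "outcome evaluation",
}

def steps_covered(indicators: list[str]) -> set[str]:
    """Return the hierarchy steps represented by a campaign's reported indicators."""
    return {STEP_OF_INDICATOR[i] for i in indicators if i in STEP_OF_INDICATOR}

example_campaign = ["reach", "likes", "shares", "attitude change"]
print(steps_covered(example_campaign))       # {'process', 'engagement', 'priming steps'}
print(len(steps_covered(example_campaign)))  # 3 of the 6 steps measured
```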
Characteristics of Included Campaigns From the 11,235 studies initially identified, we included 99 studies. These studies were published between 2012 and 2022 and related to 93 campaigns . Of these, 47 were social media only , 9 were digital only , 24 were mass media campaigns , and 13 were social marketing campaigns . Most campaigns were conducted in the United States (n=42), 8 from each of Australia and Canada, 7 were from the United Kingdom, 4 from China, 3 from Italy, 2 from each of Aotearoa New Zealand and Qatar, and 1 from each of Belgium, Brazil, Chile, Denmark, Ghana, India, Indonesia, Malaysia, the Netherlands, Puerto Rico, Saudi Arabia, Vietnam, and Wales. Four were multicountry campaigns. Health issues targeted by the campaigns included smoking or vaping (n=14), sexual health or HIV (n=9), mental health (n=9), cervical cancer screening or HPV vaccination (n=7), COVID-19 (n=7), nutrition and eating disorders (n=7), overweight and obesity (n=5), Alzheimer disease or dementia (n=3), influenza vaccination (n=3), reproductive or antenatal health (n=2), physical activity (n=2), road safety (n=2), skin cancer prevention (n=3), breast cancer screening (n=2), and hepatitis (n=2), while 6 targeted multiple risk factors for chronic diseases.
Other techniques used included contests, pledges, hashtags, prompting users to like and share content, and crowdsourcing the distribution of campaign material. Almost all (n=81) campaigns set objectives related to awareness raising and individual behavior change. Just 5 campaigns set objectives aimed at social change, while the objectives of 9 campaigns were unclear. With regard to theories and frameworks, 41 campaigns made explicit use of at least one theory or framework in their design or evaluation. The specific theories used varied significantly, but we classified 16 as being individual level, 8 as interpersonal level, and 9 as community level. A total of 14 campaigns used bespoke models or frameworks for practice. No campaign explicitly used the traditional HOE. Measures Collected and Comparisons to Objectives and Underlying Theories and Frameworks We found that 68 campaigns had reported process measures (relating to reach, impressions, counts of posts, etc), 24 awareness measures, 73 engagement measures, 30 priming steps measures, 16 behavioral trialing measures, and 25 outcome evaluation measures. No campaigns had measures that spanned all 6 steps in the Chan et al model, while 5 measured 5 of the steps, 13 measured 4 steps, 19 measured 3 steps, and 45 measured 2 steps. Eleven campaigns measured only 1 step. Just under half of the campaigns (n=42) did not measure anything beyond the engagement step, while only 8 campaigns reported measures from priming steps, behavioral trialing, and outcome evaluation only and did not measure process, awareness, or engagement. Social media–only campaigns measured fewer steps on average compared to the other campaign types (2.2 compared to 3.3 for digital-only campaigns, 2.6 for mass media campaigns, and 2.9 for social marketing campaigns). The most common process measures were views or related measures (n=29), reach (n=29), impressions (n=23), and count of posts or tweets (n=10). The most common engagement measures were likes or reactions (n=43), shares or retweets (n=40), comments (n=36), clicks or click-through rate (n=27), and number of followers or fans (n=23). Most campaigns (n=55) did not collect measures that allowed them to determine whether the campaign objective had been met; that is, they were process evaluations only. Similarly, of the 27 campaigns that used an explicit theory or framework, only 10 collected measures that aligned with the theory or framework. For example, 6 campaigns used the Health Belief Model; while the Health Belief Model posits that there are necessary precursor steps toward behavior change such as self-efficacy and perceived susceptibility, only 2 of these campaigns reported measuring these concepts in their evaluation. A Model of Social Media Campaigns Using our review findings, we developed a model of effects for campaigns using social media . The model is based on a few key observations, combined with our understanding of the traditional HOE model and social media. First, given that most campaigns set individual behavior change objectives and individual-level theories and frameworks were the most common, there appears to still be a belief that exposure to the campaign will lead directly to individual behavior change. Consistent with the conventional HOE, it was also common to position priming steps, such as attitude, knowledge, or belief change, as an important intermediate step between engagement and behavior change. 
However, this was not always apparent, suggesting that there is an assumption that there is a direct path from engagement to behavior change or that engagement itself is representative of priming steps or behavioral trialing. Second, most social media campaign evaluations are focused on process and engagement measures. This focus suggests a significant variation from the traditional HOE. Engagement is now positioned as critical to the success of social media campaigns, where previously it was awareness. Awareness was measured infrequently, suggesting that either evaluators do not consider it relevant, that it is considered too resource-intensive to measure, or alternatively, there is an implicit assumption that engagement encompasses or is equivalent to awareness. In addition, engagement is defined in many ways, with no consistent attempt to distinguish between engagement types (eg, likes vs shares vs audience interaction). That is, all types of engagement are treated as equal in campaign evaluations. Third, our model suggests that campaign effects no longer operate in a completely linear and sequential manner. It appears that there are multiple points at which campaign effects would circle back and influence earlier steps in the model. For example, engagement was positioned not just as a step toward behavior change but was also seen as important because of its ability to generate word-of-mouth marketing and message amplification, which in turn could increase exposure and lead to more engagement. Similarly, changes in priming steps may increase interest in a health issue, leading to an increase in engagement. Fourth, although very few campaigns in our review targeted social change (eg, increased support for policy change and grassroots advocacy), there is also an assumption that engagement can lead to such changes. These social changes can then lead to behavior change or directly improve health outcomes. This is similar to the alternate HOE model proposed by Hornik and Yanovitzky , which also included alternative pathways to achieving behavior change. Finally, campaign evaluations focused on the positive effects of social media use. However, the risks that come from negative engagement, like facilitating the spread of misinformation, also need to be considered in the model.
Our review shows that the traditional HOE model has certain deficiencies when it comes to describing a conceptual pathway of effects for public health social media campaigns.
We propose that the model needs to reflect changes in the apparent value or contribution of existing concepts as well as add new concepts and pathways not previously described in the HOE. We hope that our model will be of use to campaign designers and evaluators. As with the traditional HOE, it allows for targeting and tailoring of campaign messages and highlights measures and indicators that can or should be captured in an evaluation. Equally, it highlights underlying assumptions in social media campaigns and raises questions as to the accuracy of those assumptions. This will allow researchers to test those assumptions and modify and improve on our initial model. In turn, this should lead to improved campaign practice in public health and beyond. An important assumption underpinning our model and current social media campaign practice is that the nebulous concept of “engagement” is critical to success. Our model reflects the current assumption that engagement with the campaign is the focal point, with all outcomes dependent on generating engagement. Others have also noted how frequently engagement is reported in evaluations , and determining what content generates engagement is a frequent subject of research . When considered alongside our findings, this suggests that engagement is sometimes seen as the ultimate goal of campaigns rather than an intermediate step to achieving health outcomes. However, with so few studies measuring concepts beyond engagement, it is difficult to assess the real importance of engagement. The focus of social media evaluations needs to shift from engagement to the other effects in the model, including priming steps, social effects, and behavior change. There is already discussion in the literature that can provide some insight into how this might be done . Some have proposed that there are different levels of engagement, which vary in intensity and feeling toward social media content . These different levels of engagement may lead to differential outcomes for a social media campaign, but we noted no consistent attempt at exploring this question. This may be because of an assumption that all engagement is “good.” However, it is well established that misinformation, disinformation, trolling, and other forms of interaction on social media can cause harm . Our review shows that public health campaigns do not routinely consider negative engagement with the campaign and what impact it has on the audience or on the message of the campaign. Where negative engagement occurs, it is plausible that the campaign may fail to achieve its objectives or have the paradoxical effect of making some audience members worse off . Public health campaigns are also known to sometimes have a “success to the successful” effect, whereby the position of better-off groups is improved by exposure to the campaign while the situation of marginalized or disadvantaged groups is left unchanged or worsened . For example, health promotion campaigns in tobacco control and nutrition have sometimes been found to increase health inequities even while decreasing overall rates of ill-health . Further research is needed to consider how health promotion campaigns on social media may have similar negative effects and how exposure to campaigns translates to real-world change. We need to explore whether all web-based engagement is equal and good. If it is not, there may be a hierarchy of types of engagement, which may be context-dependent. 
Research into these questions may help improve the conceptual model we have developed and ultimately improve campaign practice. The role that social media companies play in shaping what is perceived to be important for evaluations must be considered. They provide a near-endless stream of data to campaigners, much of it related to engagement, making it easily available and analyzable, which in turn encourages evaluators to center engagement as an important measure of success without formally assessing whether this is the case. We acknowledge that just because engagement (or any other measure) is commonly reported does not mean that campaigners feel it is the most important or relevant measure. However, frequent reporting, coupled with the availability and ease of collecting engagement data, may have created a cycle that continuously inflates the apparent importance of engagement. Equally, the absence of reporting does not suggest that a step is not assumed to be present. For example, priming steps were infrequently measured, but the prominence of individual-level theories suggests that these steps are still considered important, even though they may not always be measured. Similarly, while awareness was infrequently reported by comparison to exposure and engagement, this may be because it is thought to be less meaningful on social media, because it is assumed to be equivalent to engagement, or because it is more difficult to assess. This may change if social media companies change what data they make available to campaign planners. Meta (Facebook's parent company), for example, already provides some businesses with an ability to measure awareness through what they call "brand lift" studies. These studies select users who fit the campaign's target audience and randomly assign them to exposure or control before surveying them on ad recall, brand awareness, and message association. Should tools like this become more widely available, we may see a shift in the number of social media health campaigns reporting awareness. All this raises the question: are we focusing on engagement because it matters, or because it is easy to measure? In this way, we are echoing the criticisms of the traditional HOE and the primacy that that model gives to awareness. We found that almost all campaigns included in our review were awareness-raising or individual behavior-change campaigns. In other words, they were not making use of the "social" element of social media. The potential for using social media to generate social movements and shift social norms and structures has been highlighted elsewhere, but our findings indicate that very few campaigns have attempted to realize this potential. Given this, it is possible that our model does not accurately or completely capture potential social effects and relevant pathways. More campaigns that aim to have social effects are needed, along with rigorous evaluation and reporting. The interactional rather than transactional nature of our proposed model of social media campaign effects brings to mind recent developments in the application of systems thinking to health issues. The use of systems approaches is at an early stage of development, but in recent years researchers have begun to turn their attention to the application of systems thinking in social marketing and more specifically to social media. Our proposed model is relatively simple, with tight boundaries to emphasize the newly introduced concepts.
In a web-based version, our model has also been rendered as a more expanded systems map to illustrate possible future directions for this work, and the model is explained there in more detail. A strength of this study is that we adopted a systematic approach to reviewing evidence and included campaigns from across numerous areas of public health. This should boost the generalizability of our model within health. A limitation of our study is that we did not consider scale in developing the model. Social media can be used to target small communities, as well as for mass-reach campaigns, so further research should explore whether the effects or the pathway of effects vary depending on the scale of the campaign. Additionally, while we did not exclude studies written in languages other than English, our use of English search terms would likely mean that some relevant campaigns have been missed. Similarly, as most campaigns were conducted in high-income, English-speaking countries, the generalizability of our model outside of these countries will need to be explored further. Finally, as research lags practice in this space, some of the most popular social media platforms (eg, TikTok) are underrepresented in our review. The way that campaigns work or are theorized to work on these platforms may differ from what is represented in our model. This highlights the need to regularly review and update our model as new information comes to light. Our review shows that the traditional HOE that underpins health communication campaigns needs to be updated to reflect the nature of social media. The model we have developed is intended as a first step to addressing the shortcomings of the traditional HOE and assisting campaign planners and evaluators in developing and evaluating campaigns. Further testing of the model is essential, however, especially in relation to the role of engagement in the conceptual pathway. |
Nobel Prize for physiology or medicine in 2023: how to dupe the cellular innate immune system using modified RNA for therapeutic treatment | fd783cf4-0dae-4c8a-a2c6-f0617ad8c8de | 10758357 | Physiology[mh] | |
The Role of Quercetin as a Plant-Derived Bioactive Agent in Preventive Medicine and Treatment in Skin Disorders | 5df83536-a24c-440d-b893-cb949d8607cb | 11243040 | Preventive Medicine[mh] | The directions of development in the food industry are strictly connected to the dominating trends in the food sector. Currently, worldwide tendencies point to sustainable development, with a focus on lab-grown food. It involves transferring production from the natural world to the laboratories in the face of growing ecological challenges like environmental degradation, climate change, and shrinking natural resources. The trend reflects the increased awareness and expectations of customers regarding sustainable development, and is a potential solution to the issue of raw material recovery in the situation of the global raw material shortage and the growing problems with waste management. The food industry fits perfectly into the latest developments with its new technologies for utilizing waste materials or enriching food with compounds obtained from waste. That, in turn, results in further advancements like the health-promoting qualities of food, or the reduction and/or management of byproducts, particularly those from plant production, which are used as the main source of health-benefitting bioactive compounds like polyphenols. Quercetin (Que), 3,3′,4′,5,7-pentahydroxyflavone, is an organic compound in the group of flavonols, which are a class of flavonoids. In nature, quercetin occurs as the aglycone of flavonoid glycosides, forming isoquercetin with glucose, rutin with rutinose, hyperoside with galactose, and quercitrin with rhamnose. Quercetin-3-O-glucoside, naturally occurring as a yellow plant pigment, is the most common Que glycoside. The compound is sparingly soluble in water, and soluble in alcohol, lipids and organic solvents. Quercetin is not synthesized in the human body. Quercetin is widely distributed in many plant-derived products; foods high in quercetin include, e.g., onion, capers, grapes or berries. Depending on dietary habits, some of them can be the main sources of quercetin. However, the concentration of bioactive compounds, including quercetin, depends on numerous factors: plant maturity, harvest time, and farming techniques. It should be noted that byproducts contain up to several times more quercetin than edible parts of plants and, therefore, waste may be a good source of quercetin and other bioactive compounds used in the food, chemical, beauty, and pharmaceutical industries. Quercetin is highly valued for the health benefits it offers. Its most important properties include the ability to neutralize free radicals, and its anticancer, anti-diabetic, anti-aging, antibacterial and anti-inflammatory effects observed even in cases of inflammation associated with chronic diseases. Additionally, a positive impact on human skin has been confirmed in research. Due to its beneficial health effects and the fact that it is easily available from plants and food industry byproducts, quercetin has the potential to be used in medicine, especially in disease prevention and in the treatment of skin diseases and injuries. Thus, the therapeutic potential of quercetin requires a summary providing an insight into the research conducted to date, especially into its impact on biochemical mechanisms and expression of enzymes and proteins, which is the main objective of this paper.
Quercetin prevents degradation of collagen due to UV radiation in human skin and inhibits MMP-1 and COX-2 expression. Quercetin inhibits UV-induced AP-1 activity and NF-κB. Additionally, quercetin may reduce phosphorylation of ERK, JNK, AKT, and STAT3. Kinase assays using purified protein demonstrated quercetin's ability to directly inhibit the activity of PKCδ and JAK2. This suggests its possible direct interaction with PKCδ and JAK2 in the skin, counteracting UV-induced aging. Research by Vicentini et al. confirmed reduction in skin irritation due to UV radiation by inhibition of NF-κB and such inflammatory cytokines as IL-1β, IL-6, IL-8, and TNF-α. Kim et al. point to the fact that quercetin in propolis reduces PDK-1 and AKT phosphorylation, which suggests its efficacy in preventing UV-induced photoaging. Furthermore, when combined with caffeic acid ester and apigenin, quercetin reduces PI3K activity, further enhancing its protective effect. Findings from the clinical study conducted by Nebus et al., in which oak quercetin formulated into an SPF 15 protective cream was used, demonstrated its protective effect on collagen and elastin with sustained proper degradation of waste proteins in aging skin. Patients in the study observed considerable improvement in skin parameters such as wrinkling, elasticity, smoothness, skin radiance and moisture. The anti-aging effects of quercetin and its derivative, quercetin caprylate, were demonstrated in studies conducted by Chondrogianni et al. The substances showed the ability to activate the proteasome, which is responsible for the degradation of damaged proteins in human cells. Its activation is crucial for increasing resistance to oxidative stress and improving the viability of HFL-1 fibroblasts. Both compounds contributed to the rejuvenation of aging fibroblasts, which indicates their potential as anti-aging ingredients. The research into using quercetin for UV protection started with studies of its function in plant tissues. Choquenet et al. described the increased level of photoprotection within the UVA range displayed by quercetin and its derivative (rutin) used in an oil-in-water emulsion at 10% concentration. The effect was fortified when the flavonoids were used in association with titanium dioxide. Rajnochová-Svobodová et al. demonstrated that quercetin and its derivative, taxifolin (dihydroquercetin), effectively reduce UVA-induced damage in skin fibroblasts and epidermal keratinocytes. These antioxidants prevent the formation of reactive oxygen species (ROS), depletion of GSH and activation of CASP3, and increase the expression of the antioxidant proteins HO-1, NQO1 and CAT. In a direct comparison, quercetin proved to be more effective than taxifolin, although at the highest concentration it exhibited pro-oxidant properties. Additionally, Liu et al. highlight the potential applications of dihydroquercetin in the treatment and prevention of skin diseases, including those caused by solar radiation, further emphasizing the importance of this substance in dermatology. The protective effects against UVB radiation were the subject of studies by Vicentini et al. When quercetin was topically applied to the skin in the form of a water-in-oil microemulsion, it effectively penetrated the deeper layers of the epidermis without causing irritation. It significantly limited the depletion of glutathione induced by UVB exposure and reduced the activity of ultraviolet-induced metalloproteinase.
The findings were subsequently confirmed in Casagrande et al., where topically applied quercetin also mitigated glutathione depletion and reduced the activity of metalloproteinase and myeloperoxidase. Furthermore, research conducted by Zhu et al. demonstrated that quercetin affected UVB-induced cytotoxicity in epidermal keratinocytes. The antioxidant blocked the production of reactive oxygen species (ROS) caused by radiation. By scavenging reactive oxygen species, quercetin protected the cell membrane and mitochondria, and slowed the leakage of cytochrome c, inhibiting keratinocyte apoptosis. The primary mechanism responsible for pigmentation and skin color is melanogenesis; thus, halting, slowing and counteracting this process is fundamental for skin lightening and whitening efforts. Based on in vivo and in vitro studies, Choi and Shin did not demonstrate a whitening effect of quercetin in cosmetic products. However, quercetin plays a crucial role in blocking melanogenesis by inhibiting tyrosinase, a key enzyme that activates the pigmentation process. Nonetheless, other studies indicate that quercetin can increase tyrosinase activity and promote melanogenesis, which may result from concentration-dependent activity. Findings of the above studies seem to suggest that pure quercetin at concentrations higher than 20 μM reduces melanin concentration, while at concentrations of 10–20 μM, it increases melanin content. Melanin content also decreases at concentrations of 50–100 μM, but this fact seems to be associated with increased cellular toxicity, which can be reduced by using quercetin in combination with vitamin C and arbutin. Additionally, the anti-melanogenic effect depends on the position of hydroxyl groups (-OH) in the compound structure and the position and type of sugar residues in quercetin derivatives. Conversely, studies by Chondrogianni et al. on quercetin and its derivative, quercetin caprylate, showed that both compounds induce changes in the physiological characteristics of cells, including a localized whitening effect. One of the earliest analyses regarding the impact of quercetin on reducing dermal wound healing time in rats was conducted by Gomathi et al. The authors divided the studied rats into a control group and a group treated with quercetin incorporated into a collagenous matrix. The group subjected to quercetin treatment exhibited a greater potential to quench free radicals in the active inflammatory processes in skin wounds. Consequently, the study pointed to the potential of quercetin as a wound healing-promoting agent when used in dressing material. Gopalakrishnan et al. emphasized the significant role of TGF-β1 and VEGF activation by quercetin in the wound healing acceleration process. Two groups of rats, a control group and a Que-treated group, were evaluated over a period of 14 days. The quercetin-treated group exhibited a faster rate of wound closure compared to the control group. In addition to activating the above-mentioned compounds, quercetin significantly attenuated TNF-α activity, supported fibroblast proliferation processes and collagen activity, and induced IL-10 levels. A similar study was conducted by Kant et al., where 80 rats divided into 4 groups (including a control group) were treated with dimethyl sulfoxide solutions and Que concentrations ranging from 0.03% to 0.3%.
After 20 days, it was found that the administration of the polyphenol at the highest dose of 0.3% resulted in faster formation of granulation tissue, allowing for quicker wound closure by supporting angiogenic and proliferative processes within fibroblasts. Quercetin also contributed to the induction of, e.g., IL-10, VEGF and TGF-β1, while simultaneously reducing TNF-α activity. The mechanism by which quercetin improves wound healing quality was investigated by Doersch and Newell-Rogers. Quercetin was found to exhibit the potential to reduce fibrosis without affecting the rate of healing. Quercetin reduced the level of integrin αV while increasing the expression of integrin β1. These changes in transmembrane receptor expression may affect processes such as cell proliferation and extracellular matrix production. Additionally, the presence of quercetin reduces the demand for extracellular matrix, thus contributing to easier wound healing and scar reduction. Moravvej et al. point out quercetin's potential for scar treatments. Long et al. showed that quercetin combined with X-ray radiation reduces collagen synthesis in fibroblasts, both healthy and scarred. Continuing their research on the properties of quercetin, Hosnuter et al. demonstrated in a randomized controlled clinical trial the efficacy of onion extract containing quercetin in reducing scar hypertrophy, while not affecting itchiness. Further studies conducted by Ramakrishnan et al. focused on the synergistic action of quercetin with vitamin D3 on isolated keloid fibroblasts. Quercetin was found to reduce cell proliferation and collagen synthesis while inducing apoptosis. In addition, Si et al. demonstrated that quercetin can suppress keloid resistance to radiotherapy by inhibiting HIF-1 expression and interacting with the PI3K/AKT pathway, reducing AKT phosphorylation. Yin et al. demonstrated the effectiveness of quercetin in the process of pressure ulcer healing. Their in vitro and in vivo analyses of induced pressure ulcers in mice provided evidence that quercetin stimulates the processes responsible for wound edge closure through interference with the MAPK pathway. Additionally, quercetin was found to reduce cellular infiltration and decrease the concentration of inflammatory cytokines. As suggested by Karuppagounder et al., Beken et al., and Hou et al., the anti-inflammatory and antioxidant properties of quercetin can be utilized in the treatment of atopic dermatitis (AD). The course and development of AD are associated with the action of epithelial-derived cytokines. In studies conducted by Beken et al., the use of quercetin in prepared keratinocytes resulted in a decrease in the expression of cytokines such as IL-1β, IL-6, IL-8, and TSLP, and a simultaneous increase in the expression of SOD1, SOD2, GPX, CAT, and IL-10. An increase in mRNA expression of E-cadherin and occludin was also observed, along with a decrease in the expression of the matrix metalloproteinases MMP-1, MMP-2, and MMP-9. Additionally, inhibition of ERK1/2 and MAPK phosphorylation was noted, as well as decreased expression of the nuclear transcription factor NF-κB, with no effect on STAT6. The mechanism of action of quercetin had been previously determined by Cheng et al., who focused on its impact on retinal pigment epithelial cells. Hou et al. demonstrated that quercetin effectively lowers expression levels of cytokines such as CCL17, CCL22, IL-4, IL-6, IFN-γ, and TNF-α.
Quercetin demonstrates the ability to inhibit pro-inflammatory cytokines induced by Propionibacterium acnes bacteria. Lim et al. showed that it suppresses TLR-2 production and inhibits phosphorylation of the p38, ERK, and JNK MAPK kinases. Additionally, a decrease in mRNA levels for MMP-9 was observed. In vivo studies showed quercetin-induced reduction in the thickness of erythema and swelling. Liu et al. developed quercetin-loaded liposomes in gels (QU-LG) to investigate their possible therapeutic effect on cutaneous eczema, given the favorable antioxidant activity and anti-inflammatory effects of quercetin. Que was encapsulated in liposomes and evenly dispersed in sodium carboxymethyl cellulose hydrogels in order to enhance its bioavailability and the efficiency of its dermal delivery. The research demonstrated that quercetin-containing liposomes-in-gel (QU-LG) applied to the skin of mice suffering from skin eczema exhibited good stability and adhesion to the skin. In the antioxidant test, QU-LG inhibited the production of malondialdehyde (MDA) in the liver better than the commercially available drug, dexamethasone acetate cream. Compared to untreated mice, mice treated with QU-LG showed a statistically significant reduction in dermatopathological symptoms. The results suggest that QU-LG exhibits good antioxidant activity in vivo and in vitro and that it can be used in the prevention and treatment of cutaneous eczema. Maramaldi et al. and Kurek-Górecka investigated the use of phytosomes, which are alternatives to liposomes, as formulations enhancing the bioavailability of active ingredients. Quercetin in the form of phospholipids significantly reduced erythema and wheal diameter. The study of Lu et al. into quercetin-loaded niosomes described improved solubility, photostability, and skin penetration ability compared to conventional methods of delivering active ingredients. Li et al. developed a xerogel utilizing polyvinyl alcohol (PVA) and quercetin-borate nanoparticles as the crosslinking agent. The produced xerogel films exhibited high bacteriostatic properties, high antioxidant potential, and accelerated skin regeneration. Nalini et al. compared the effects of quercetin and quercetin-loaded chitosan nanoparticles on the healing processes of open wounds. An accelerated healing process was observed in the studied rodents, due to inhibition of inflammatory cytokines, promotion of angiogenesis, and inhibition of free radicals. Increased levels of hydroxyproline and hexalin indicated enhanced reepithelialization. Quercetin used in monotherapy showed lower effectiveness than the quercetin-loaded alginate-chitosan nanoparticle formulation; the nanoparticles at a concentration of 0.075% showed the highest efficacy. Yang et al. investigated the association between quercetin and histamine, a compound that triggers inflammation. The study demonstrated a direct interaction between histamine H4 receptors and quercetin. It was found that quercetin inhibits IL-8 mRNA expression in keratinocytes and the scratching behavior induced by compound 48/80. Additionally, quercetin reduces calcium influx (Ca2+) induced by the H4 receptor through the TRPV1 channel, which limits itching, inflammation, and discomfort sensation. Studies conducted by Katsarou et al. did not show a beneficial effect of quercetin on sodium-lauryl-sulfate-induced skin irritation. It was observed that quercetin did not restore the protective barrier function of the skin, normalize transepidermal water loss, nor reduce erythema to levels observed before irritation.
Due to the more challenging wound healing process in cases of diabetes, studies have investigated the wound-healing potential of quercetin in diabetic rats. Fu et al. administered quercetin-containing medication to diabetic rats at various concentrations and observed reduced levels of inflammatory cytokines and a reduced number of iNOS-positive cells, as well as increased activity of CD206-positive cells and intensified angiogenesis processes. The researchers attributed these effects to the influence of quercetin on inducing a shift in macrophage phenotype towards M2. Kant et al. also evaluated the effectiveness of quercetin on wounds in diabetic rats. The applied quercetin (0.3%) ointment accelerated wound healing and induced, e.g., VEGF and TGF-β, while at the same time reducing levels of MMP-9 and TNF-α. The researchers also observed higher levels of GAP-43 resulting from polyphenol application. Numerous studies have described not only the anti-inflammatory and antioxidant effect of quercetin on skin but also its general antioxidant activity. Experimental studies by Tang et al. confirmed the effectiveness of quercetin in inhibiting the cytokines TNF-α, IL-1β, and IL-6. Theoretical calculations illustrated that the oxygen atom on the B ring may be the main site of electron cloud density changes, which underlies the ROS-scavenging effects of quercetin. Ha et al. demonstrated that quercetin 3-O-β-D-glucuronide possesses protective effects on skin, including anti-inflammatory and antioxidant actions against UVB- or H2O2-induced oxidative stress. It reduces the expression of pro-inflammatory genes (COX-2, TNF-α) in stressed HaCaT cells as well as increasing Nrf2 expression and inhibiting melanin production in α-MSH-treated B16F10 cells. Quercetin, combined with rutin and curcumin and loaded on porous copper oxide nanorods, exhibits not only anti-inflammatory properties but also bacteriostatic and bactericidal properties. Mansi et al. demonstrated the Que nanocomposite's effective antagonism against bacteria such as Staphylococcus aureus, Bacillus subtilis, Salmonella typhi, Pseudomonas aeruginosa, Escherichia coli, and Klebsiella pneumoniae. The inhibitory action against Pseudomonas aeruginosa and Staphylococcus aureus was confirmed by Chittasupho et al. Ramzan et al. utilized nanoparticles composed of quercetin and its copper complex, employing polycaprolactone (PCL) as a structural material, for treating skin infections. Studies showed that the nanoparticles have a strong bactericidal effect against Staphylococcus aureus and stimulate epidermal regeneration without causing skin irritation. This points to the potential use of quercetin as an active agent in the treatment of impetigo and as an alternative to widely used ciprofloxacin. Quercetin delivered in the form of oil-based nanostructured lipid carriers also showed efficacy against Staphylococcus aureus. Lúcio et al. described the synergistic action of quercetin with omega-3 fatty acids, where the bioactives, formulated as nanostructured lipid carriers and hydrogels, exhibited high stability and skin permeability. Quercetin, combined with other antioxidants occurring in sumac (Rhus coriaria) extract, shows strong antibacterial activity. Gabr and Alghadir demonstrated the antibacterial effect of the extract against Staphylococcus aureus and Pseudomonas aeruginosa.
Additionally, the extract reduced inflammation, regulated the activity of the MMP-8 and MPO enzymes, supported wound contraction, and promoted collagen and hydroxyproline deposition. Extracts isolated from Syncarpia hillii leaves, containing the quercetin glycoside quercitrin, also exhibited high antibacterial activity against both Gram-positive and Gram-negative bacteria, including staphylococci and Enterococcus faecalis. The leaves are used in traditional herbal medicine for treating wounds and skin infections. Extracts rich in quercetin and its glycosides from the Opuntia genus (Opuntia spp.) also show antibacterial properties in wound treatment, as do extracts from Bridelia ferruginea and Spermacoce princeae, which additionally exhibit UV-protective effects. Quercetin exhibits antibiofilm effectiveness. Biofilms are bacterial aggregations that can grow on different surfaces. Biofilms colonize wounds, protect the pathogen from host defenses and obstruct antibiotic delivery, thereby weakening wound healing. Mu et al. claim that quercetin and other secondary metabolites isolated from plants have demonstrated varying levels of biofilm inhibition in Gram-negative pathogens. In their studies, quercetin and extracts rich in this bioactive compound exhibited a concentration-dependent reduction in Staphylococcus epidermidis biofilm formation. Quercetin reduced biofilm formation by up to 95.3% at a concentration of 500 μg mL⁻¹. Gopu et al. tested quercetin against biofilm formation by different pathogens responsible for food spoilage. Their studies, conducted for quercetin at different concentrations (20–80 μg mL⁻¹), showed 13–72%, 8–80%, and 10–61% reductions in biofilm biomass of K. pneumoniae, P. aeruginosa, and Y. enterocolitica, respectively. In their studies, Musini et al. highlighted the antibiofilm activity of quercetin against the drug-resistant pathogen S. aureus. Since biofilm formation is an important virulence factor influencing S. aureus persistence in both the environment and the host organism and is responsible for biofilm-associated infections, the antibiofilm activity of quercetin is a promising option against antibiotic-resistant strains of this bacterium. The effectiveness of quercetin and its derivatives in treating skin diseases may be limited by its poor absorption. Studies conducted by Hung et al. have demonstrated that quercetin is better absorbed through photoaged skin, likely because UV exposure disrupts the barrier functions of the skin. Lin et al. compared the transdermal absorption levels of quercetin and its derivatives, including polymethoxylated quercetin (QM). It was shown that the structure of the compound has a crucial impact on transdermal absorption. Derivatives with higher lipophilicity more easily penetrated the skin barrier. Furthermore, the sugar moiety of glycosides significantly affects skin permeability, with those having -OH groups potentially forming hydrogen bonds with ceramides in the epidermis. QM demonstrated the highest level of permeation, suggesting it as the best delivery form of quercetin for topical applications. Quercetin is considered safe for daily consumption by food safety authorities, including the U.S. Food and Drug Administration (FDA). The average daily intake of quercetin from food is approximately 20–40 mg. The daily dosage of quercetin in dietary supplements ranges from 50 to 500 mg. Detection of this compound in plasma is possible about 15–30 min after consuming a 250 mg or 500 mg chewable quercetin preparation.
The maximum concentration is reached after 120–180 min (levels return to baseline after 24 h). In studies, doses of 500–1000 mg of quercetin are typically used. It has been proven that such an amount of quercetin, up to 1000 mg, taken over several months does not adversely affect blood parameters, liver and kidney function, or serum electrolyte levels. In addition to many health benefits, potential health risks associated with the use of quercetin have also been noted. Studies have shown a negative impact of quercetin supplementation intended for neurodegeneration prevention and the protection of nerve cells. It has been demonstrated that high exposure to quercetin can lead to a reduction in intracellular glutathione levels, as well as changes in the genes responsible for intracellular processes. Different studies suggest potential cytotoxic activity of quercetin, induced by inhibiting the action of specific genes in the presence of gamma radiation doses. In conclusion, it is important to emphasize that the risks associated with quercetin use are minimal compared to its benefits. Long-term consumption of quercetin is not recommended, however, for individuals prone to hypotension (low blood pressure) and those with impaired blood clotting. Quercetin exhibits many health-promoting, antioxidant, and therapeutic properties. While research confirms the potential use of its various forms (e.g., extract, emulsion, aqueous extracts) in skin therapy, the variety of methods for obtaining quercetin further broadens its possible application in medicine. The cited studies suggest that research on the use of natural sources of quercetin in the treatment of skin diseases, specifically reducing oxidation processes, aging, melanogenesis and scarring, accelerating wound healing, and protecting against UV radiation, is well founded. Advances in the research will undoubtedly contribute to the development of new dermatological preparations and therapies. Many studies in the field of new material development mention incorporating quercetin, or extracts containing this compound, into materials for wound healing and the treatment of skin diseases. This direction seems to be the closest to commercialization, and products (hydrogels, patches, wound dressings, etc.) can therefore be expected to appear in the near future. Obtaining quercetin from food industry waste and constantly improving extraction technologies and techniques will allow for greater availability of this compound, not only in medicine, but also in pharmaceuticals and the food sector. |
TGFβ in malignant canine mammary tumors: relation with angiogenesis, immunologic markers and prognostic role | 9cab6401-daa2-4885-8b99-dd811f2ac236 | 11340227 | Anatomy[mh] | Introduction Transforming growth factor-β (TGFβ), a multitasking cytokine expressed in a variety of tissues, exerts its activities through two serine-threonine kinase receptors: TGFβRI and TGFβRII (Derynck et al.; Sigal; Principe et al.; Hu et al.). Once the ligand is activated, TGFβ signaling is mediated through SMAD and non-SMAD pathways. The SMAD signaling pathway requires the phosphorylation and subsequent translocation of SMAD complexes to the nucleus, where they interact with transcriptional co-regulators and other factors to mediate target gene expression or repression (Shi and Massagué; Hata and Davis; Hu et al.; Tzavlaki and Moustakas). Although less frequent, the non-SMAD pathways contribute to cell proliferation, motility, and survival through p38 MAPK, p42/p44 MAPK, Rho GTPase, and PI3K/Akt signaling activation (Hong et al.; Mu et al.). TGFβ actively participates in key biological functions related to homeostatic cellular pathways (including apoptosis, proliferation and immunity) (Flanders et al.), and is critically important for mammary morphogenesis and secretory function through specific regulation of epithelial proliferation, apoptosis, and extracellular matrix (Moses and Barcellos-Hoff). Nevertheless, increasing evidence suggests that TGFβ signaling also plays an important role in malignant transformation in breast cancer, participating in cancer cell migration, survival and angiogenesis (Gupta et al.; Moses and Barcellos-Hoff; Chen et al.; Ding et al.; Zhao et al.). TGFβ demonstrates a paradoxical role in the malignant mammary tumor process. In early stages of carcinogenesis, this cytokine seems to restrain growth and serves as a tumor suppressor. However, with the development of malignancy, TGFβ becomes a promoter of tumor cell invasion and metastasis (Dumont and Arteaga; Bierie and Moses; Principe et al.; Colak and Ten Dijke). For instance, the dysregulation of TGFβ pathways in breast cancer has been correlated with disease progression, allowing cancer cells to ensure their own survival (Dumont and Arteaga; Chen et al.; Juang et al.; Xie et al.). Furthermore, TGFβ seems to shape the tumor microenvironment and, when produced in excess by tumor cells, act in a paracrine manner on the peritumoral stroma, tumor neovessels and immune system, resulting in increased cell–matrix interaction and angiogenic activity and suppressed immune surveillance, which fosters tumor development (Gorsch et al.; Bao et al.; Lang et al.; Ding et al.; MaruYama et al.). By avoiding the tumor-suppressive roles of TGFβ, mammary cancer cells can take advantage of its potent immunosuppressive functions. For instance, TGFβ signaling in T cells represses both their inflammatory and cytotoxic differentiation programs (Dumont and Arteaga; Padua and Massagué; Liu et al.; van den Bulk et al.; MaruYama et al.). In addition to impairing T cell effector functions, TGFβ plays a pivotal role in the generation of regulatory T cells (Tregs) from a population of peripheral CD4+CD25− T cells through the induction of the key transcription factor FoxP3 (Fantini et al.; Chen and Konkel). In human breast cancer, TGFβ and FoxP3 share signaling pathways with a crucial impact on several tumor hallmark steps, including angiogenesis, facilitating nutrient exchange and metastasis (Gupta et al.
; Padua and Massagué; Chen and Konkel; Wang et al.; Lainé et al.). Both TGFβ and FoxP3 are reported to be sufficient to upregulate the expression of vascular endothelial growth factor (VEGF), one of the most selective and potent angiogenic factors known, attracting adjacent endothelial cells and promoting the formation of tumor neovascularization (Donovan et al.; Gupta et al.; Kajal et al.). In human breast cancer, the role of TGFβ among the different tumor sub-types has been a subject of interest. TGFβ seems to have a tumor suppressor effect mainly in luminal breast cancer and in initial stages of tumors. On the other hand, in HER2+ and triple-negative sub-types it seems to have a pro-tumorigenic effect (Tang et al.; Wilson et al.; Parvani et al.). In a recent study, Vitiello et al. suggested that TGFβ signaling exerts tumor-suppressive effects in luminal-B-HER2+ and p53-negative breast cancers. Additionally, in humans TGFβ and FoxP3 have an active role in VEGF signaling and in the tumor angiogenic switch by promoting an increased intratumoral microvessel density, which contributes to mammary carcinogenesis and poor prognosis (Gupta et al.; Kajal et al.). Regarding canine mammary tumors (CMT), some contradictory studies were published (Klopfleisch et al.; Yoshida et al.). An in vitro study using a mammary gland tumor cell line (CHMp13a) suggested that TGFβ induces invasiveness in the cells (Yoshida et al.). These findings do not support those of Klopfleisch et al., who reported that increased tumoral proliferative activity was related to a loss of TGFβ-3 and LTBP-4 coupled with reduced TGFβR-3 expression. Furthermore, Treg cells seem to play a role in CMT development and aggressiveness and may contribute to increased angiogenesis (Carvalho et al.). Another in vitro study showed an increase in FoxP3 mRNA and protein expression in activated dog lymphocytes stimulated with TGFβ and IL-2. Although less prominent, T cell receptor activation alone induced small increases in FoxP3 expression. All of these results suggest that the regulation of FoxP3 expression in dog and human Tregs is similar (Biller et al.). However, to the best of our knowledge, the prognostic value of TGFβ and its correlation with FoxP3 Treg cell expression in dog mammary tumors have not been investigated yet. To elucidate the potential association of TGFβ and FoxP3 with angiogenesis and clinical outcome in malignant CMT, immunohistochemistry was performed to detect the expression of TGFβ in a series of malignant CMT. We also aimed to assess the correlation of TGFβ expression with intratumoral FoxP3 Treg cells and angiogenesis markers [VEGF expression, microvessel density (MVD)] previously determined in the same tumors and published (Carvalho et al.). Furthermore, a 2-year follow-up of the dogs enrolled in this study was performed to determine the overall survival rate. Materials and methods 2.1. Sample selection and clinicopathological analysis A total of 67 female dogs of different breeds, with malignant mammary tumors received for diagnosis and treatment, were included in this study. As reported in our previous study (Carvalho et al.), all animals (mean age of ∼10 years) were free from distant metastasis at the time of diagnosis (confirmed through thoracic X-ray and abdominal ultrasound) and were only submitted to surgery (regional or complete mastectomy) as treatment (chemotherapy and/or radiation therapy was not performed).
For the analysis, one tumor per animal was selected. When more than one malignant neoplasm was observed in an animal, the tumor with the most aggressive clinical and histopathological features (larger size, infiltrative growth, higher grade) was selected (Queiroga et al.). According to the literature (Queiroga et al.), the clinical stage of the animals was categorized into local (without lymph node involvement) and regional (metastasis at regional lymph nodes). For this classification, the TNM system (Owen and VPH/CMO/80.20) was used, where T describes the size of the primary tumor (largest diameter), N the presence (N+) or absence (N0) of lymph node metastasis, and M the presence (M+) or absence (M0) of metastasis at distant organs. Of note, tumor size (T1 < 3 cm; T2 ≥ 3 and < 5 cm; T3 ≥ 5 cm) and skin ulceration were also analyzed. For the clinical follow-up, a physical examination, a radiological evaluation of the thorax and an abdominal ultrasound scan were performed in the animals 15 days after surgery and every 90 days thereafter for a minimum period of 730 days. The time of overall survival (OS) was calculated from the date of surgery to the date of animal death/euthanasia (due to advanced stages of the disease within 730 days) or to the date of the last clinical examination (dogs that survived more than 730 days). For the preparation of this manuscript, no procedure was carried out that was not strictly necessary for the treatment of each animal attended at AniCura CHV Porto Hospital Veterinário and Onevet Hospital Veterinário Porto, both located in Porto, Portugal, and under the clinical supervision of two clinicians (HG and LL). Only data collection was performed, without interfering with the clinical decisions taken in each case. Informed consent on the collection of samples and the clinical follow-up was obtained from each patient's owner. This study was approved by the Scientific Council of the School of Agrarian and Veterinary Sciences, University of Trás-os-Montes and Alto Douro in 2011, as complying with Portuguese legislation for the protection of animals (Law No. 92/1995). 2.2. Histopathological examination Collected samples were fixed in 10% buffered formalin and paraffin-embedded. Tissue sections of 4 μm thickness were stained with hematoxylin and eosin (HE) following routine methods. For diagnosis, each slide was evaluated according to the classification published by the Davis–Thompson DVM Foundation (Zappulli et al.). Furthermore, by using the method proposed by Peña and collaborators (Peña et al.), the histological grade of malignancy (HGM) was evaluated for each sample. The presence of tumor necrosis, neoplastic intravascular emboli and regional lymph node involvement were also clinicopathological characteristics considered for the analysis. Tumor necrosis was evaluated as presence or absence, as previously described (Carvalho et al.). 2.3. Antibodies The following antibodies and conditions were used for immunohistochemistry assays: TGFβ [polyclonal antibody against TGFβ1 (sc-146), Santa Cruz Biotechnology, Dallas, Texas, USA; 1:100], FoxP3 [anti-mouse/human Foxp3 antibody, Clone eBio7979 (221D/D3), eBioscience, San Diego, USA; 1:100], VEGF [Clone JH121 (MA5-13182), Thermo Scientific, Waltham, MA, USA; 1:100], CD31 [Clone JC70A (IS610), Dako, Glostrup, Denmark; 1:20]. 2.4.
Immunohistochemistry FoxP3, TGFβ, VEGF and CD31 protein expression in the tumors collected from the female dogs was evaluated by immunohistochemistry (IHC). IHC for FoxP3 was performed using a polymeric labeling methodology (Novolink Polymer Detection System; Novocastra, Newcastle, UK), whereas for TGFβ, VEGF and CD31 a streptavidin–biotin–peroxidase complex method with the Ultra Vision Detection System kit (Lab Vision Corporation, Fremont, CA, USA) was used, as previously described by us (Carvalho et al.). Briefly, deparaffinized and rehydrated slides were submitted to microwave antigen retrieval for 3 cycles of 5 min at 750 W in 0.01 M citrate buffer (pH 6.0). Following 20 min of cooling at room temperature, sections were incubated overnight with the primary antibodies at 4 °C. The antibody reactions were visualized with the chromogen 3,3′-diaminobenzidine tetrachloride (DAB; Dako, Denmark). The slides were counterstained with Gill's hematoxylin, dehydrated, cleared and mounted. For each immunoreaction, positive and negative controls were included. As a negative control, the primary antibody was replaced with an irrelevant isotype-matched antibody. As a positive control for TGFβ, intestine sections were used. In the case of FoxP3, sections of canine lymph nodes were used. Liver sections and dog angiosarcoma were used for VEGF and CD31, respectively. 2.5. TGFβ, FoxP3, VEGF and CD31 staining evaluation Intratumoral FoxP3, VEGF and CD31 (PECAM-1, the latter used for determining microvessel density) were evaluated using a well-established method already applied in other studies by our group (Queiroga et al.; Carvalho et al.; Raposo et al.; Carvalho et al.). TGFβ immunoreactivity was evaluated in the intratumoral area by two independent experts who analyzed the entire slides (×200 magnification) using an immunohistochemical semiquantitative method adapted from a previously published study (Ding et al.). The final score was obtained as the product of the percentage of positive cells (immunolabelling extension) and the staining intensity. The percentage of positive cells was scored as 0 (0% positive cells), 1 (<10% positive cells), 2 (10–50% positive cells), 3 (51–80% positive cells), or 4 (>80%), whereas the staining intensity was scored as 1 (weakly stained), 2 (moderately stained), or 3 (strongly stained). A tumor was assigned to the low TGFβ class if the product of the staining intensity and the percentage-of-positive-cells score was ≤ 6; a final immunohistochemical score > 6 indicated the high TGFβ class. 2.6. Statistical analysis Statistical analysis was performed using SPSS software version 27.0 (Statistical Package for the Social Sciences, Chicago, IL, USA). Categorical variables were analyzed using the Chi-square test, while continuous variables were assessed through Analysis of Variance (ANOVA) with Tukey's multiple comparison of means. Correlations were evaluated using Pearson's correlation test for parametric variables and Spearman's correlation test for nonparametric variables. Survival curves were constructed using the Kaplan–Meier method with mean values as the cutoff, and differences in survival were analyzed using the log-rank test. Multivariate survival analysis was conducted using Cox regression analysis, including all variables simultaneously via the enter method. All tests were assessed at a 95% confidence level (p < 0.05).
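To make the scoring rule and its use in the survival comparison concrete, the following is a minimal Python sketch rather than the authors' SPSS workflow: it implements the semiquantitative TGFβ score described in Section 2.5 (extension score 0–4 multiplied by intensity score 1–3, with a cutoff of 6) and then compares overall survival between the resulting low and high TGFβ groups with a log-rank test using the lifelines package. The data frame and its column names are illustrative, not taken from the study.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test


def extension_score(pct_positive: float) -> int:
    """Immunolabelling extension: 0 (0%), 1 (<10%), 2 (10-50%), 3 (51-80%), 4 (>80%)."""
    if pct_positive == 0:
        return 0
    if pct_positive < 10:
        return 1
    if pct_positive <= 50:
        return 2
    if pct_positive <= 80:
        return 3
    return 4


def tgfb_class(pct_positive: float, intensity: int) -> str:
    """Final score = extension x intensity (1 weak, 2 moderate, 3 strong); <=6 is 'low', >6 is 'high'."""
    return "high" if extension_score(pct_positive) * intensity > 6 else "low"


# Illustrative records: one row per dog (os_days = overall survival in days; event = 1 for tumor-related death).
df = pd.DataFrame({
    "pct_positive": [5, 60, 85, 40, 90, 70],
    "intensity":    [1, 2, 3, 2, 3, 3],
    "os_days":      [730, 410, 220, 730, 150, 300],
    "event":        [0, 1, 1, 0, 1, 1],
})
df["tgfb_class"] = [tgfb_class(p, i) for p, i in zip(df["pct_positive"], df["intensity"])]

low = df[df["tgfb_class"] == "low"]
high = df[df["tgfb_class"] == "high"]

# Kaplan-Meier estimates per group and a log-rank test of the difference in overall survival.
km_low = KaplanMeierFitter().fit(low["os_days"], event_observed=low["event"], label="TGFβ low")
km_high = KaplanMeierFitter().fit(high["os_days"], event_observed=high["event"], label="TGFβ high")
result = logrank_test(low["os_days"], high["os_days"], low["event"], high["event"])
print(km_low.median_survival_time_, km_high.median_survival_time_, result.p_value)

In the study itself these analyses were run in SPSS; the sketch only shows how the dichotomization of Section 2.5 feeds into the survival comparison of Section 2.6, and the printed medians and p value have no bearing on the reported results.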
Results 3.1. Clinicopathological data Most of the tumors included in this study were histologically classified as tubulopapillary carcinomas (n = 31). The others included 8 solid carcinomas, 12 complex carcinomas, 3 anaplastic carcinomas and 13 carcinosarcomas. Twenty-eight tumors had lymph node metastasis. Twenty-one tumors presented intravascular neoplastic emboli (31.3%). The HGM was classified as I (n = 17, 25.4%), II (n = 18, 26.8%) or III (n = 32, 47.8%). 3.2. Expression of TGFβ, FoxP3, VEGF and CD31 in malignant CMT Some of the FoxP3, VEGF and CD31 cases included in this work were already used in a study published by our team, in which the staining patterns observed in the samples were described (Carvalho et al.). The mean number (±SE) of intratumoral FoxP3+ regulatory T cells was 73.88 ± 6.585 (range 19–267). The mean number (±SE) of total neovessels was 39.01 ± 2.562 (range 6–106). The anti-TGFβ antibody had high affinity for tumor epithelial cells. TGFβ immunoexpression was predominantly a diffuse or granular cytoplasmic staining, most evident in the cytoplasm of the ductal epithelium, with prominence of the cytoplasmic membrane. Regarding the percentage of TGFβ-immunolabelled cells, 10 cases showed extension 1 (<10% positive cells), 18 cases showed extension 2 (10–50%), and 23 and 16 cases demonstrated extensions 3 (51–80%) and 4 (>80%), respectively. For TGFβ labelling intensity, there was a relatively homogeneous distribution between moderate (40.3%, n = 27) and strong labelling (35.8%, n = 24), whereas tumors with weak intensity (23.9%, n = 16) were less frequent. 3.3. Associations of TGFβ immunostaining with clinicopathological features Our analysis identified a striking association between the presence of aggressive disease and high expression of TGFβ. Tumors with higher levels of TGFβ were associated with skin ulceration (p = 0.018), tumor necrosis (p = 0.024), high HGM (p < 0.001), presence of neoplastic intravascular emboli (p < 0.001) and presence of lymph node metastasis (p < 0.001). The corresponding table highlights all the results described above. 3.4. Correlation between TGFβ, FoxP3, VEGF and CD31 immunoexpression The levels of TGFβ were positively correlated with intratumoral FoxP3 (r = 0.719; p < 0.001), VEGF (r = 0.378; p = 0.002) and CD31 (r = 0.511; p < 0.001). In this study, three classes were considered: TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31. Each class was divided into three categories: (1) low immunoreactivity for both markers; (2) low immunoreactivity for one marker and high for the other; and (3) high immunoreactivity for both markers. 3.5. Association of TGFβ/VEGF class with intratumoral FoxP3 and MVD in malignant CMT FoxP3-positive T-cell counts in tumors with concurrent high TGFβ/VEGF immunoexpression (n = 23; mean 118.26 ± 12.535; range 32–267) were higher than those in tumors with low immunoexpression of both markers (n = 15; mean 41.80 ± 5.446; range 19–85).
FoxP3 expression was also higher in tumors with high immunoexpression of only one of the markers [TGFβ low/VEGF high (n = 28) or TGFβ high/VEGF low (n = 1); n = 29; mean 55.28 ± 6.586; range 23–186] than in tumors with low expression of both markers (p < 0.001). Similar results were observed for MVD. Tumors with high TGFβ/VEGF immunoexpression (n = 23; mean 53.70 ± 2.872; range 22–89) showed higher microvessel counts than tumors with low immunoexpression of both markers (n = 15; mean 15.40 ± 1.337; range 6–21). The mean MVD was also higher in tumors with high immunoexpression of only one of the markers [TGFβ low/VEGF high (n = 28) or TGFβ high/VEGF low (n = 1); n = 29; mean 39.59 ± 3.705; range 9–106] than in tumors with low expression of both markers (p < 0.001). 3.6. Relationship of TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31 classes with clinicopathological variables of tumor aggressiveness Tumors with concurrent high expression of the TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31 markers were associated with parameters of tumor malignancy: high HGM (p < 0.001 for TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31), presence of neoplastic intravascular emboli (p < 0.001 for TGFβ/FoxP3 and TGFβ/CD31; p = 0.001 for TGFβ/VEGF) and presence of lymph node metastasis (p < 0.001 for TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31). More information is provided in the corresponding table. 3.7. Follow-up study In this study, tumors of the carcinosarcoma, anaplastic carcinoma and solid carcinoma histological types (p = 0.002), larger tumor size (p = 0.011), presence of tumor necrosis (p = 0.002), neoplastic intravascular emboli (p < 0.001), lymph node metastasis (p < 0.001), high HGM (p < 0.001) and higher levels of CD31 (p = 0.001), VEGF (p = 0.02) and FoxP3 (p < 0.001) were associated with shorter OS time. All these findings are summarized in the corresponding table. Tumors with high TGFβ levels and with concurrent high expression of TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31 were associated with shorter OS time (p < 0.001 for TGFβ, TGFβ/FoxP3, TGFβ/VEGF and TGFβ/CD31 in Kaplan–Meier analyses). The presence of lymph node metastasis retained its association with shorter OS in the multivariate Cox regression analysis, emerging as an independent predictor of poor prognosis [hazard ratio (95% CI): 11.033 (1.358–89.653); p = 0.025].
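To make the three-category co-expression classes used in sections 3.4–3.6 concrete, the sketch below groups hypothetical tumors by their TGFβ and VEGF classes and summarizes FoxP3 counts per group; the toy records and helper names are ours and do not reproduce the study data.

```python
from statistics import mean

# Toy records: (TGFβ class, VEGF class, intratumoral FoxP3+ cell count) -- illustrative only
tumors = [
    ("low", "low", 30), ("low", "high", 55), ("high", "high", 120),
    ("high", "high", 95), ("low", "low", 42), ("high", "low", 60),
]

def coexpression_category(tgfb: str, vegf: str) -> str:
    """Map a pair of marker classes to the three categories described in the text."""
    if tgfb == "high" and vegf == "high":
        return "high for both markers"
    if tgfb == "low" and vegf == "low":
        return "low for both markers"
    return "high for one marker only"

groups = {}
for tgfb, vegf, foxp3 in tumors:
    groups.setdefault(coexpression_category(tgfb, vegf), []).append(foxp3)

for category, counts in groups.items():
    print(f"{category}: n={len(counts)}, mean FoxP3 = {mean(counts):.1f}")
```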
Discussion This study primarily explored the immunoexpression of TGFβ, FoxP3, VEGF, and CD31 in malignant CMT and their associations with tumor clinicopathological features. We found that TGFβ immunoexpression was associated with aggressive tumor characteristics such as skin ulceration, tumor necrosis, higher HGM, neoplastic intravascular emboli, and lymph node metastasis. Additionally, a positive correlation was observed between TGFβ, FoxP3, VEGF, and CD31. TGFβ plays a dual role in malignant tumor development. During the early stages of carcinogenesis, TGFβ acts as a tumor suppressor, negatively regulating cellular proliferation. However, as the malignant tumor develops, TGFβ switches toward a tumor-promoting role, mediating tumor cell proliferation, migration and invasion (Dumont and Arteaga; Moses and Barcellos-Hoff; Ding et al.; Colak and Ten Dijke). These findings suggest that the dysregulation of TGFβ pathways in tumors induces signal reprogramming, allowing cancer cells to mimic normal functions and guarantee their survival. In fact, recent studies have demonstrated that high levels of TGFβ expression are closely associated with several human malignancies (Coban et al.; Minamiya et al.; Stojnev et al.; Perez et al.; Torrealba et al.), including breast cancer (Bao et al.; Lang et al.; Juang et al.; Huang et al.; Niu et al.). In human breast cancer, high levels of TGFβ are observed in advanced carcinomas and have been correlated with disease progression and worse clinical outcomes (Gorsch et al.; Buck et al.; Bao et al.; Juang et al.; Huang et al.). TGFβ produced by tumor cells may act in a paracrine mode on tumor stromal cells, tumor neovessels and immune cells, contributing to tumor immunosuppression, angiogenesis and progression (Dumont and Arteaga; Lang et al.; Niu et al.). To the best of our knowledge, the prognostic value of TGFβ and the role it may play in CMT immunosuppression and angiogenesis had not yet been investigated in the veterinary literature. The findings of our work are in accordance with recent literature on human breast cancer (Gorsch et al.; Bao et al.; Lang et al.; Ding et al.; Juang et al.) and suggest a link between TGFβ and more aggressive tumor phenotypes, reflecting its involvement in CMT malignant transformation. In the veterinary field, one study using a CMT cell line demonstrated that TGFβ induces the mesenchymal marker vimentin, increasing the invasive capacity of tumor cells, a crucial step in metastasis formation.
Interestingly, after prolonged stimulation with TGFβ, this induction is reversed in a phenomenon similar to the mesenchymal–epithelial transition (the reverse of the epithelial–mesenchymal transition), an effect that favors the formation of new tumor masses at the site of metastatic lesions (Yoshida et al.). Another study also showed higher immunohistochemical expression of matrix metalloproteinase-9 (MMP-9) and TGFβ in malignant CMT in comparison with benign tumors. Additionally, in vitro activation of TGFβ/SMAD pathways induced overexpression of MMP-9 in breast cancer cell lines and an increase in breast cell malignancy (Dong et al.). These results corroborate our work, and the lack of additional studies in CMT hampers more direct comparisons. Our data also demonstrated that TGFβ levels showed a strong positive correlation with intratumoral FoxP3, VEGF, and CD31 levels. In agreement with our findings, in human breast cancer TGFβ plays an important role in the tumor microenvironment switch, promoting increased angiogenic activity and suppressed immune surveillance and thereby contributing to tumor development, progression and poor clinical outcome (Donovan et al.; Gupta et al.; Petersen et al.; Ding et al.; Juang et al.). In breast tumor sites, TGFβ acts as an important immunosuppressant, repressing the anti-tumor activity of effector T cells (Padua and Massagué; Stüber et al.; Lainé et al.). Additionally, TGFβ signaling in T cells participates in the expression and stabilization of the transcription factor FoxP3. The increasingly high concentrations of TGFβ secreted by tumor cells induce FoxP3 expression in peripheral CD4+CD25– T cells and their precursors, rendering them inactive (Chen and Konkel; Principe et al.). This occurrence is clinically relevant, since the enrichment of CD4+CD25+FoxP3+ Treg cells in human mammary tumors is associated with poor prognosis (Gupta et al.; Kajal et al.; Lainé et al.). Moreover, Treg cells amplify the effects of TGFβ, creating a positive auto-regulatory loop of TGFβ signaling in CD4+CD25– T cells that possibly stabilizes their regulatory phenotype (Fantini et al.). In this process, FoxP3+ Treg cells deserve particular attention, not only as an important source of TGFβ but also because they directly instruct cancer cells by secreting TGFβ (Jensen-Jarolim et al.). In humans, the common TGFβ and FoxP3 signaling pathways have a crucial impact on several phases of mammary carcinogenesis, including the tumor angiogenic switch (Gupta et al.; Padua and Massagué; Chen and Konkel; Kajal et al.; Lainé et al.). Consistent with our results, data in human breast cancer demonstrated that intratumoral FoxP3 was correlated with the levels of TGFβ, VEGF and tumor microvessel density (Gupta et al.; Lainé et al.). TGFβ and FoxP3 are reported to regulate the formation of new tumor blood vessels through a combination of responses that increase the production of VEGF (Donovan et al.; Gupta et al.; Petersen et al.; Kajal et al.). In dog mammary tumors, it has been demonstrated that Treg cells may contribute to increased angiogenesis (Carvalho et al.). Another study showed that FoxP3+CD4+ T cells in dogs could be expanded in vitro after the addition of TGFβ and IL-2 and by tumor cell receptor activation (Biller et al.). However, to the best of our knowledge, this is the first study to demonstrate the prognostic value of TGFβ in CMT. Interestingly, our results suggest that an autocrine/paracrine TGFβ/FoxP3 signaling loop may exist in CMT.
The common TGFβ and Treg cell pathways provide the tumor with a mechanism that facilitates evasion of immune surveillance and promotes VEGF-dependent angiogenesis, contributing to CMT progression and aggressiveness. Conclusion In our study, tumors with concurrent high expression of TGFβ with FoxP3, VEGF, or CD31 were significantly associated with clinicopathologic factors typically related to clinical aggressiveness (high HGM, presence of vascular emboli and nodal metastasis) and were linked to shorter OS times. Despite these relevant associations between prognosis and the immunologic and angiogenic markers, lymph node metastasis was the single independent predictor of poor prognosis in this case series of CMTs. |
Preventable fatal injury during rally race: a multidisciplinary approach | 8d0f88bb-0290-41c3-81fe-bd06bf4a11f8 | 8036227 | Pathology[mh] | The motor vehicle crash (MVC) constitutes an important challenge for forensic pathology, especially in recent years. Our study focuses on a fatal accident during a rally race and therefore falls within the MVC sub-category. Rally is a motorsport discipline that takes place on public asphalt or dirt roads. Modern rally competitions have developed in Europe since the beginning of the twentieth century. The "Mille Miglia", the most prestigious and ancient race, can be considered an ancestor of this motorsport discipline. The concept of the rally remained vague, and there were no official regulations until the first half of the 1960s; rally races became competitions with official regulations around the first half of the 1970s. During a rally race, pilots may drive only a restricted series of cars. Rally competitions are divided into two types of stages: special stages and transport stages. The latter consist of a route marked out on a radar or road book that must be completed within a specific time limit; in these stages, penalties are applied for completing the route either too fast or too slowly. Special stages are timed tests of skill, in which pilots drive on winding and rough roads in the absence of road safety equipment. Rally races can therefore be considered regularity races during transport stages, in which pilots must comply with road traffic and timing regulations, and time trials during special stages. Epidemiological data (Table, Fig.) show an increase in fatal MVCs from the 1980s, possibly as a consequence of the increasing speed of rally cars. On the other hand, marked progress in the therapy of polytraumatized patients in the medical field and in occupant safety systems in the engineering field has decreased the rally mortality rate in the last decade. The safety cycle, a safety framework for road accidents, is in fact fundamental for the development of a specific prevention system. It includes several surveillance mechanisms, biomechanical studies carried out through crash tests, and analysis of epidemiological data on the vehicles, drivers, and places involved in MVCs. In the illustrated case, a car accident occurred during the "Targa Florio" rally, a historic Sicilian car racing competition that usually takes place in the mountains of Palermo province in May. Vincenzo Florio, a member of a wealthy Palermo family, created, financed, and organized the "Targa Florio" race. It was raced 61 times, without interruption, from 1906 to 1977. It was turned into a rally race for safety reasons in 1978, remaining one of the stages of the Italian and European Rally Championships. The highest mortality rate in the "Targa Florio" rally race was recorded in the decade 1970–1980 (Table). In the case of an MVC, it is necessary to determine whether the injuries resulting from the impact are sufficient to cause death, or whether they are a necessary but not sufficient condition; in the latter case, pre-existing diseases could induce an abnormal response to trauma. It is therefore also important to exclude that the MVC was the consequence of an acute pathological event preceding the incident. Autopsy remains one of the main data sources for fatal crashes; it is fundamental for answering questions regarding the kind and cause of death. Autopsy has recently been integrated with radiological investigation, which is necessary to accurately define injuries before the autopsy and to guide the medical examiner during it.
There has been an increased use of postmortem computed tomography (PMCT) in the forensic field recently, although the postmortem examination remains the gold standard. PMCT allows the dynamics of the MVC to be reconstructed in a noninvasive way. Three-dimensional volume rendering (3DVR) imaging in radiological applications allows three-dimensional reconstructions of the whole body to be obtained. The synergy between medico-legal and postmortem radiological investigation characterizes injuries more thoroughly, improving prevention and safety systems. PMCT advantages, such as short execution time, objective and reproducible records, and full depiction of fractures and lesions hardly detectable at the autoptic examination, make it ideal for synergy and integration with the medico-legal investigation. The illustrated case concerns a fatal accident that occurred during a rally race. The pilot was driving his rally car on a straight road after a curve when he lost control of the vehicle, possibly because of wet asphalt. The vehicle went off the road, running over a referee and crashing into a tree. The pilot and the referee died on impact; the co-pilot survived. The pilot's body was transferred to a university hospital morgue, where it underwent a medico-legal examination. The investigation was supplemented by a preliminary PMCT 18 h after death. All radiologic scans were performed by two board-certified radiologists experienced in forensic imaging. The photographic survey of the scene collected by the judicial police and videos of the first aid response were provided to help analyze and better understand the dynamics of the incident. PMCT The postmortem interval between PMCT and the medico-legal autopsy was about 6 h. We performed a non-contrast whole-body scan of the victim, enveloped in a bag, before the conventional autopsy. PMCT was performed with a 128-slice MDCT scanner (Somatom Definition AS®, Siemens Healthcare, Erlangen, Germany) using a tube voltage of 120 kVp with an effective tube current of 120–160 mAs; a gantry rotation time of 0.5 s, a beam pitch of 1.2, and a table speed of 46 mm per gantry rotation; and overlapped slices with a thickness of 0.6 mm [ESPR guidelines]. Images were reviewed using our institutional PACS viewer (Elephant.net suite®, AGFA Health Care N.V., Belgium) and dedicated workstations (Singovia®, Siemens Healthcare, Erlangen, Germany; Horos Project, Pixmeo). Bone and soft-tissue algorithms were used for the whole-body examination, and dedicated head and lung algorithms for the brain and pulmonary evaluation, respectively. Because the whole body could not be acquired in a single scan, five acquisitions (the head and trunk, the arms, and the legs) were performed after repositioning. Images were then evaluated using multiplanar reformatting (MPR) in the coronal and sagittal planes and volume-rendering (VR) reconstruction with dedicated bone and lung programs. Autopsy and biomechanical injuries analysis The autopsy is based on the collection and analysis of postmortem data. Samples of different organs were taken after the autopsy and stored in 10% neutral buffered formalin. Hematoxylin-eosin-stained tissue sections were analyzed with an optical microscope at ×4, ×10, ×40 and ×100 magnification. We carried out an analysis of drugs and toxic substances in biological fluids. The assessment of the post-traumatic injuries found through PMCT and autopsy is based on the evaluation of several quantitative and qualitative indices.
Qualitative-statistical indices (AIS, MAIS, IIS) and quantitative loading indices (HIC, NIC, TBI), together with their respective tolerance thresholds, may be taken into consideration in the medico-legal field, not only for MVC-related deaths but also for other traumatic deaths, such as those caused by work-related accidents, sporting accidents, explosions and mass disasters. The Abbreviated Injury Scale (AIS) is an anatomical evaluation system based on the classification of each injury according to its severity and location, using a scale from one to six (Table). The number "one" corresponds to mild lesions, the number "six" to fatal injuries.
Injuries are assigned to nine different anatomical areas according to anatomical criteria. The AIS represents the threat to life of each individual injury; it does not provide a complete indication of the overall clinical picture of the patient. We classified the injuries according to severity, location, and the biological tissue involved (body surface, nerves, vessels, bones). Finally, we compared the injuries found at PMCT and at autopsy. Findings of the analysis of the photographic survey and circumstantial evidence Photographic and circumstantial evidence analysis showed incorrect installation of the double shoulder belt system of the head and neck support (HANS) collar, an important helmet safety device. The pilot did not properly wear the HANS belts upon the HANS yoke and did not cross them. He wore the body belts correctly, as confirmed by engineering expertise. PMCT findings PMCT clearly showed an extensive mastoid and skull base fracture (Fig.), mainly on the right side of the skull base and extending to the left parietal bone, as well as a fracture of the right side of the atlas. Signs of pneumocephalus, ventricular hemorrhage, and small subarachnoid hemorrhages were also highlighted. A displaced fracture of the right acetabulum (Fig.) was also encountered. Autopsy findings The body was 178 cm in length and weighed 90 kg. At the external examination, bilateral otorrhagia, abrasions, and bruises on the right side of the lateral cervical region and the acromioclavicular area were found. Sectioning of the skull showed hemorrhagic infiltration of the inner scalp surface, galea capitis and periosteum in the left temporo-parieto-occipital region, and a skull fracture extending from the temporal bone to the occipital bone and also involving the left parietal bone (Fig.). Subarachnoid hemorrhage in the left temporo-parietal portion of the brain and cerebellum, leptomeningeal congestion (Fig.), and a laceration of the forepart of the brain stem (Fig.) involving both cerebral peduncles were highlighted. Basicranial fractures were also found: a horizontally extending fracture line in the middle cranial fossa, also involving the sella turcica, the greater wings of the sphenoid bilaterally and the squamous part of the temporal bone; a horizontally extending fracture in the left posterior cranial fossa involving the squamous part of the occipital bone; and a fracture line on the right side of the foramen magnum (ring fractures). Sternoclavicular dislocation, pulmonary emphysema, contusions, and perirenal bleeding were highlighted on examination of the thorax and abdomen. Sectioning of the pelvis confirmed the acetabular fracture. The autopsy did not reveal other concomitant acute pathological events. Histological and toxicological findings Reading of the histological slides showed parenchymal necrosis in the right parietal lobe, vascular congestion in the occipital lobe, perivascular edema in the cerebral membranes, hemorrhage in the cerebellar membranes, and neuronal degeneration, edema, and subpial and intraparenchymal hemorrhage in the brain stem (Fig.). Non-obstructive sclerosis of the left coronary artery was found. Focal bronchoalveolar hemorrhage, atelectasis and widespread emphysema were also highlighted. Perirenal hemorrhage was confirmed. Neither drugs nor ethanol was detected by screening toxicological analysis.
Injuries classification and analysis Injuries analysis (Table) showed five types of head injuries: cranial fractures with an AIS value of 3; subarachnoid hemorrhage with an AIS value of 6; cerebral pneumocephalus with an AIS value of 4; cerebellar hemorrhage with an AIS value of 6; and brainstem laceration with an AIS value of 6. There was one type of vertebral injury: a C1 fracture with an AIS value of 6. There were two types of thoracic injuries: pulmonary emphysema and contusions with an AIS value of 4, and sternoclavicular dislocation with an AIS value of 2. There was one type of abdominal injury: perirenal bleeding with an AIS value of 3. All injuries were found through both diagnostic methods (PMCT and autopsy), except for cerebral pneumocephalus, which was clearly identified only at PMCT, and the brainstem laceration, which was better appreciable at autopsy.
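To show how the AIS codes listed above roll up into the summary indices cited earlier (e.g., MAIS), the sketch below computes the maximum AIS per body region and overall. The Injury Severity Score (ISS) shown uses the conventional derivation (sum of the squares of the three highest body-region maxima, set to 75 when any injury is coded AIS 6); it is included only as a standard illustration with a simplified region grouping, not as a value reported by this study.

```python
# AIS-coded injuries from the case (body region, AIS), per the classification above;
# the region labels are a simplified grouping chosen for this illustration.
injuries = [
    ("head", 3),    # cranial fractures
    ("head", 6),    # subarachnoid hemorrhage
    ("head", 4),    # pneumocephalus
    ("head", 6),    # cerebellar hemorrhage
    ("head", 6),    # brainstem laceration
    ("spine", 6),   # C1 fracture
    ("thorax", 4),  # pulmonary emphysema and contusions
    ("thorax", 2),  # sternoclavicular dislocation
    ("abdomen", 3), # perirenal bleeding
]

# Maximum AIS per region and overall (MAIS)
max_per_region = {}
for region, ais in injuries:
    max_per_region[region] = max(max_per_region.get(region, 0), ais)
mais = max(max_per_region.values())

# Conventional ISS: 75 if any AIS equals 6, otherwise the sum of squares
# of the three highest region maxima.
if mais == 6:
    iss = 75
else:
    top3 = sorted(max_per_region.values(), reverse=True)[:3]
    iss = sum(score ** 2 for score in top3)

print(max_per_region)              # {'head': 6, 'spine': 6, 'thorax': 4, 'abdomen': 3}
print("MAIS:", mais, "ISS:", iss)  # MAIS: 6 ISS: 75
```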
The analysis of the injuries at autopsy, supplemented with PMCT and the examination of the photographic surveys, allowed us to carry out a biomechanical analysis of the incident. The extensive damage to the front of the car (bumper and right-front tire), observed in the photographic survey, led us to confirm a frontal collision with a tree trunk. We also assume a high-kinetic-energy impact because of the considerable car damage. Frontal collisions are extremely dangerous and represent 50–55% of major or fatal MVCs. The vehicle occupants may be exposed to considerably high stresses (accelerations or decelerations) caused by the impact during car crashes; they may crash into vehicle interior structures because of inertial motion, and their body segments may be decelerated. The magnitude of the deceleration vector reaches its peak in milliseconds and then decreases to around zero. Head trauma, with basicranial fractures and brainstem laceration, represents the major injury in the present case. Skull base involvement is an expression of high kinetic energy. Directed head flexion is one of the mechanisms of ring fractures at the skull base in occipital bone traumas; a shearing effect occurs in facial and occipital bone traumas. The support system for the head is composed of the cervical vertebrae, acting as a pivot for the head, and the muscle-tendinous structures of the neck, connecting the head to the pivot. The cervical vertebrae constitute the focus where the reaction force, opposed to the damaging force, originates. Damaging forces deform the skull massively, shortening the diameter between the point of application of the damaging force and the portion of the vertebral column in which the reaction force originates. Skull fractures may occur following violent head hyperextension or hyperflexion. Ring fractures at the skull base are the most common skull fractures in high-energy MVCs; they can be complete or incomplete. Incomplete ring fractures extend along the middle cranial fossa and behind the petrous pyramid of the temporal bone, bilaterally. The inertial force related to the acceleration due to blunt head trauma leads to a brain shift within the braincase, resulting in intraparenchymal contusions. Compressive and tensile (pulling) forces can cause additional injuries. Combined dynamic forces come into play in head trauma more frequently; these forces are capable of producing a brain shift through two components: translation (linear motion) and rotation (angular acceleration).
Traumatic brain deceleration forces can cause diffuse axonal injury (DAI), characterized by alteration of the axonal cytoskeleton, disruption of axonal transport, and misalignment of axonal microtubules. These changes induce a time-dependent accumulation of β-amyloid precursor protein (β-APP) in the damaged axons. Immunohistochemical techniques for β-APP are in fact performed to investigate DAI, showing high sensitivity for traumatic axonal injuries and providing additional information about survival time and the degree of mechanical force. In this case, the brainstem, a vital organ, was injured; there was clear evidence of a pontomedullary laceration. A brainstem laceration induces an interruption of nerve conduction in the central nervous system, with immediate cardiopulmonary arrest and instant death. Cervical whiplash trauma can produce brainstem and cervical lacerations secondary to head and neck hyperextension–hyperflexion due to a sudden acceleration–deceleration force. It can also cause neck injuries, such as lacerations of ligaments and joint capsules, alterations of the physiological lordosis, and nerve damage. Drivers may sustain a hinge fracture of the cranial base if the head makes lateral movements during whiplash; a ring fracture is a basicranial injury with a fracture line running from side to side across the middle cranial cavities, separating the base into two halves, anterior and posterior. Head hyperextension causes injury to the anterior longitudinal ligament of the cervical spine and to the soft tissues of the front of the neck; abrupt head flexion damages the ligaments and muscles of the back of the neck, such as the sternocleidomastoid or scalenus muscles. In this case, the whiplash-related head injuries were a consequence of the incorrect installation of the double shoulder belt system (a HANS collar component). The pilot did not properly wear the HANS belts upon the HANS yoke and did not cross them, as confirmed by engineering expertise; he wore the body belts correctly. To address this issue, it should be premised that the double shoulder belt system allows a decrease in impact forces and trunk movements. The pilot's head and neck are therefore very vulnerable in case of impact, especially when the crash involves sudden movements in the frontal and transverse planes (anterior–posterior translation and flexion–extension movement). The HANS collar was progressively introduced for pilot safety. It serves as a head and neck support, resisting flexion, distraction, and deceleration movements and diverting translational head movement toward the trunk. The device is made up of different components (Fig.), each with a different task: the safety belt attached to the helmet, which links the helmet to the device, allowing the transmission of forces from the head to the device during an impact; the helmet anchor, which secures the safety belt to the helmet, allowing load transmission through the HANS collar; the HANS collar, which transmits loads from the anchor system to the HANS yoke; the anchoring system of the safety belts to the collar, which connects the safety belts with the collar, permitting load transmission from the head to the shoulders through the yoke; the yoke, which sits on the pilot's shoulders and thorax, for load transmission to the trunk; and the interface system with the safety belts, the upper part of the yoke, which interfaces with the safety belts and transfers forces from the head to the trunk.
Studies in the literature have highlighted that the use of this device significantly decreases neck tension and shear forces, injuries secondary to flexion and distraction movements, and, therefore, head–spine traumas. The HANS support device ensures greater pilot safety in car racing, decreasing the sequelae secondary to MVCs. Several studies have shown that the HANS collar decreases head motion and the damaging forces acting on the neck during frontal crashes, reducing basicranial fractures and head impacts against rigid vehicle interior structures. In conclusion, the HANS device allows forces to be transmitted from the head and neck to the trunk. MVCs, and especially high-speed motor racing injuries, represent an important cause of death. For this reason, there has been a marked development of car and occupant safety systems, such as the HANS collar, which restricts head and neck movements, allowing a decrease in traumatic craniocerebral injuries. PMCT examination is very useful in the depiction of cranial fractures, allowing full depiction of skull base fractures and hemorrhagic lesions that are otherwise hardly detectable at the autoptic examination. The use of radiological diagnosis helps in the depiction of lesions, speeds up the autopsy, and improves the understanding of the causes of death. Autopsy remains the gold standard, allowing injuries to be analyzed and other causes of death to be excluded, so it would be wrong to claim that PMCT can replace autopsy. The combination of both diagnostic methods, however, is an advantage, especially in cases of multiple traumas secondary to an incident. |
Exploring animal food microbiomes and resistomes via 16S rRNA gene amplicon sequencing and shotgun metagenomics | ee6d3af3-9238-4f34-9513-149e4fe85867 | 11837513 | Microbiology[mh] | The United States is one of the major animal food producers in the world, with 238.1 million metric tons produced in 2023 and over $267.1 billion contributed to the U.S. economy . Encompassing pet food, animal feed, and raw materials and ingredients , this diverse and complex food matrix can be further divided into feed materials/ingredients, feed additives, complete feed (including pet food), and medicated feed . A wide range of raw materials and ingredients are used to manufacture animal food, including plant-based materials (e.g., grains and oilseed meals), animal-based materials (e.g., fish meals and meat and bone meals), and feedstuffs of other origins (e.g., vitamins, minerals, amino acids, and stabilizers) . In 2023, the global animal food production by sector was as follows: broiler 28.9%, pig 25.2%, layer 13.3%, dairy 10.0%, beef 9.5%, aquaculture 4.2%, pet 2.7%, equine 0.6%, and others 5.6%, with predominant growths in the boiler feed and pet food sectors . Animal food is prone to microbial contamination and may harbor zoonotic pathogens such as Salmonella enterica and commensals such as Escherichia coli and Enterococcus spp. . Efforts to isolate and identify other foodborne pathogens or commensals in animal food have only had limited success . Although at low frequencies, antimicrobial resistance (AMR) has been observed among pathogenic and commensal bacteria recovered from animal food . Nonetheless, the comprehensive microbiota (microbiomes) and repertoire of AMR genes (resistomes) in animal food remain poorly characterized and primarily rely on culture-dependent methods that can fail to reveal the true genetic diversity of the community. Recent years have seen significant technological advancements and cost reductions in the field of next-generation sequencing . As such, metagenomic sequencing has been used extensively to profile diverse microbial communities associated with samples from humans, animals, foods, and the environment . The most common approaches for microbiome characterization are targeted amplicon sequencing of select markers, such as the 16S rRNA gene, and whole metagenome shotgun sequencing of the entire community en masse . 16S rRNA gene amplicon sequencing provides an affordable means to generate genus-level microbial community profiles, but primer bias and chimera formation may be introduced . It also provides no insight into the functional capacity of the microbial community. Conversely, shotgun metagenomics can identify various genetic determinants associated with functionality (e.g., AMR genes and virulence factors) with high resolution, but taxonomic classification of all sequencing reads may be challenging as available reference databases still need much improvement . Despite the growing interest and application of metagenomics in understanding the structure/composition and function of diverse microbial communities along the One Health continuum, there is a scarcity of studies using these advanced sequencing technologies to characterize the microbiomes and resistomes in animal food. 
Similar to human food, animal food constitutes challenging matrices for metagenomic analysis due to variable microbial loads of pathogens and commensals, physicochemical properties that may inhibit DNA extraction and amplification, and high proportions of matrix DNA from plant and animal materials . When developing metagenomic workflows for animal food, consideration should be given to core methodologies such as sample preparation, DNA extraction, 16S rRNA gene target regions and amplification protocols, library preparation, multiplexing strategies, bioinformatic tools and algorithms, and reference databases . This study aimed to gain insights into the microbial community and AMR gene profiles of three types of animal food (cattle feed, dry dog food, and poultry feed) by culture-independent 16S rRNA gene amplicon sequencing and shotgun metagenomics. We first used the ZymoBIOMICS mock microbial community (Zymo Research, Irvin, CA) for initial workflow optimization . This optimized workflow was then used to perform two trials in the three types of animal food using entirely different sample sets with replicates. In trial 1, we evaluated the effect of DNA extraction kit and two strategies for removing chloroplast and mitochondria read from the 16S rRNA gene amplicon sequencing data set. In trial 2, we profiled the animal food microbiomes by both 16S rRNA gene amplicon sequencing and shotgun metagenomics and examined resistomes derived from the latter. We present here our exploratory work profiling animal food microbiomes and resistomes using both sequencing approaches. Animal food samples Two trials were performed using different sets of bulk animal food samples (18–23 kg) obtained from a local animal food store. These included cattle feed (general-purpose ration for growing and mature beef cattle), dry dog food (complete dog food for all life stages), and poultry feed (complete, general-purpose poultry maintenance feed). From each bulk product, ten 1 kg subsamples were randomly collected and stored at 4°C until analysis. On the day of analysis, a 100 g composite sample (400 g for dry dog food due to low genomic DNA yield) was formed by combining equal amounts from the 10 subsamples. In trial 1, triplicate 25 g test portions (100 g for dry dog food) of the 100 g composites were aseptically weighed out in Whirl-Pak bags and suspended at a 1:9 ratio in modified buffered peptone water (3M Food Safety, St. Paul, MN). The mixtures were hand-massaged for 5 min (dry dog food was homogenized in a stomacher [Seward, West Sussex, UK] at 230 rpm for 2 min). Four sets (one for each DNA extraction kit) of 25 mL rinsates (210 mL for dry dog food) were transferred to 50 mL Falcon tubes and centrifuged at 900 × g for 3 min to remove animal food particles. The supernatants were transferred to new tubes and centrifuged at 10,000 × g for 20 min at 8°C. The resulting pellets (12 per sample type; 36 total) were stored at −20°C for DNA extraction by four kits. In trial 2, duplicate 25 g test portions were analyzed by two analysts independently, resulting in a total of 12 pellets for DNA extraction by one kit. The composite samples were also tested for total aerobic plate counts (APC) by the standard pour plate method and screened for the presence of Salmonella according to the U.S. Food and Drug Administration’s Bacteriological Analytical Manual (BAM) Chapter 5 . 
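As a minimal illustration of the sample-preparation arithmetic described above (equal-part composites and 1:9 w/v suspensions), the sketch below computes the per-subsample aliquot and diluent volumes; the helper names are ours, and the dry dog food case simply scales the same ratios.

```python
def composite_aliquot_g(composite_g: float, n_subsamples: int = 10) -> float:
    """Grams taken from each subsample to build an equal-part composite."""
    return composite_g / n_subsamples

def diluent_volume_ml(test_portion_g: float, ratio: int = 9) -> float:
    """Diluent volume (mL) for a 1:ratio (w/v) suspension of a test portion."""
    return test_portion_g * ratio

# 100 g composite from ten subsamples (10 g each); 25 g test portion at 1:9 -> 225 mL diluent
print(composite_aliquot_g(100), diluent_volume_ml(25))    # 10.0 225
# Dry dog food: 400 g composite (40 g each); 100 g test portion at 1:9 -> 900 mL diluent
print(composite_aliquot_g(400), diluent_volume_ml(100))   # 40.0 900
```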
DNA extraction

In trial 1, four kits were used: three from Qiagen (Germantown, MD), namely the AllPrep PowerViral DNA/RNA Kit (AllPrep kit in short), DNeasy Blood & Tissue Kit (BloodTissue kit in short), and DNeasy PowerSoil Kit (PowerSoil kit in short), and one from Zymo Research (Irvine, CA), the ZymoBIOMICS DNA Miniprep Kit (Zymo kit in short). Three of these kits were bead-based, whereas the BloodTissue kit was enzyme-based. All DNA extraction protocols were performed in triplicate following the manufacturers' instructions (Gram-positive protocol for the BloodTissue kit) with slight modifications as noted below. In trial 2, DNA extraction was done with the Zymo kit by two analysts independently. All pellets were pretreated with 20 µL of proteinase K (20 mg/mL, Zymo Research) at 56°C for 1 h before proceeding with DNA extraction, except for the BloodTissue kit, where proteinase K was part of the Gram-positive protocol. For the Zymo kit, bead-beating was performed on a Vortex-Genie 2 for 20 min. The sample DNA extracts were quantified using the Quant-iT Broad-Range or High-Sensitivity dsDNA Assay Kit on a Qubit fluorometer (Thermo Fisher Scientific, Waltham, MA).

16S rRNA gene amplicon sequencing

In trial 1, 16S rRNA gene amplicon sequencing was performed through the ZymoBIOMICS Targeted Sequencing Service (Zymo Research), whereas in trial 2, both the Zymo service and in-house sequencing were performed. For the Zymo service, all reagents were from Zymo Research unless specified otherwise. Custom primers (proprietary) were used to amplify the 16S rRNA gene V3–V4 region. PCR reactions were performed in the CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA). For animal food, peptide nucleic acid (PNA) blockers were added to prevent the amplification of chloroplast and mitochondrial DNA. Sequencing libraries were prepared with the Quick-16S NGS Library Prep Kit. The pooled library was cleaned up with Select-a-Size DNA Clean & Concentrator and quantified with TapeStation (Agilent, Santa Clara, CA) and Qubit. The final 16S library was sequenced on a MiSeq system (Illumina, San Diego, CA) using the MiSeq Reagent Kit v3 (600 cycles) with a 25% PhiX spike-in. For in-house sequencing, the V3–V4 region of the 16S rRNA gene was targeted with primers Bakt_341F (5′-CCTACGGGNGGCWGCAG-3′) and Bakt_805R (5′-GACTACHVGGGTATCTAATCC-3′). PCR reactions were carried out in a 25 µL volume containing 1× KAPA HiFi HotStart ReadyMix (Roche, Indianapolis, IN), 0.2 µM of each primer, and 2.5 µL of DNA extract using conditions described previously. After purification with AMPure XP (Beckman Coulter, Indianapolis, IN) and size verification using TapeStation (Agilent), PCRs were performed to attach dual indices and sequencing adapters using the Nextera XT Index Kit (Illumina). Up to 96 libraries (4 nM each) were pooled and sequenced with a 25% PhiX spike-in on MiSeq using the MiSeq Reagent Kit v3 with 600 cycles (Illumina).

Shotgun metagenomics

In trial 2, shotgun metagenomic sequencing libraries of the same DNA extracts used for the in-house 16S rRNA gene amplicon sequencing were constructed using the Nextera XT DNA Library Preparation Kit (Illumina). Briefly, tagmentation and tagging of sample DNA extracts with unique adapter sequences were performed using the Nextera XT transposome. Limited-cycle PCRs were used to amplify the tagged DNA and simultaneously add indexes.
After purification with AMPure XP (Beckman Coulter), each library was normalized to a 4 nM concentration, and equal volumes of the normalized libraries were pooled, denatured, and loaded at a final pooled library concentration of 1.8 pM onto a NextSeq 500 (Illumina) for sequencing using the 500/550 High Output Reagent Kit v2 (300 cycles) (Illumina).

Taxonomic profiling from the 16S rRNA gene amplicon sequencing data set

QIIME2 (v2023.2) was used for the 16S rRNA gene amplicon sequencing analysis. Briefly, primer sequences and any preceding bases were trimmed from the raw reads using the trim-paired command of the cutadapt plugin with default parameters (--p-error-rate = 0.1). Primer-free reads were error-corrected, and amplicon sequence variants (ASVs) were determined using the denoise-paired command of the DADA2 plugin with default parameters (--p-max-ee-f/r = 2, --p-trunc-q = 2, --p-min-overlap = 12) for quality trimming, read-pair merging, and chimeric sequence removal. Forward and reverse read truncation and trimming values were set manually based on the average read base quality scores. For taxonomic classification, the V3–V4 region was extracted from the Silva 138 SSURef NR99 reference database using the primer sequences (--p-min-length = 100, --p-max-length = 700, --p-identity = 0.8) prior to training the naive Bayes classifier. This trained classifier was used to assign taxonomies to ASVs using default parameters.

Taxonomic and AMR gene profiling of the shotgun metagenomic sequencing data set

Kraken 2 (v2.1.3) was used for taxonomic profiling with the default k-mer size and parameters. Briefly, base calls generated by the NextSeq 500 System were converted to FASTQ files and trimmed for sequencing adaptors and low-quality sequences using Trimmomatic with the parameters ILLUMINACLIP:Illumina-Adapter.fa:2:30:10 LEADING:20 TRAILING:20 SLIDINGWINDOW:5:20 MINLEN:90. Trimmed and filtered reads were used for all further downstream analyses with the prebuilt Kraken 2 standard plusPF database (June 2024 update; https://benlangmead.github.io/aws-indexes/k2 ), which included RefSeq archaea, bacteria, viruses, plasmids, protozoa, fungi, UniVec Core, and the most recent human reference genome (GRCh38). The microbiome composition and taxa relative abundances were estimated by Bracken (version 2.7) with a default threshold of 10. AMR gene profiling was performed using the Short, Better Representative Extract Dataset (ShortBRED) (version 0.9.4). First, ShortBRED-Identify was used to generate unique peptide markers for AMR protein sequences compiled from AMRFinderPlus v4.0.1 (database version 2024-10-29; https://ftp.ncbi.nlm.nih.gov/pathogen/Antimicrobial_resistance/AMRFinderPlus/database/ ). Specifically, ShortBRED-Identify used an 85% amino acid identity threshold to cluster the AMR protein sequences into nonredundant, highly conserved protein families. To maintain high specificity, the set of peptides was then blasted against the universal protein reference database UniRef100 ( https://www.uniprot.org/uniref/ ). ShortBRED-Quantify was used to map translated reads against the final marker set at ≥85% amino acid identity across ≥95% of the marker length, with counts normalized to reads per kilobase per million mapped reads (RPKM). Genes with fewer than 20 total mapped reads were not considered in the final summary.
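As a rough illustration of this quantification step, the RPKM normalization and the fewer-than-20-mapped-reads cut-off can be sketched in R as follows. This is a simplified stand-in for ShortBRED's internals: the gene names anticipate the Results, while the read counts, marker lengths, and library size are hypothetical.

```r
# Illustrative only: ShortBRED-style normalization of AMR marker hits to RPKM,
# followed by the <20 mapped reads reporting threshold described above.
amr_hits <- data.frame(
  gene          = c("blaCMH-1", "oqxA2", "emrD"),
  mapped_reads  = c(35, 18, 120),        # reads mapping to each gene's peptide markers
  marker_len_bp = c(1140, 1176, 1185)    # nucleotide-equivalent marker length (made up)
)
total_reads <- 15e6                      # total reads in the metagenome (hypothetical)

# RPKM = mapped reads / (marker length in kb x total mapped reads in millions)
amr_hits$rpkm <- amr_hits$mapped_reads /
  ((amr_hits$marker_len_bp / 1e3) * (total_reads / 1e6))

# Apply the reporting threshold: drop genes with fewer than 20 mapped reads
amr_hits[amr_hits$mapped_reads >= 20, ]
```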
Statistical analysis

All taxonomic and read count data were imported into RStudio (version 2023.12.1) for analysis. Unless stated otherwise, all 16S data sets were rarefied (2,500 minimum read count) prior to calculating alpha and beta diversity measures. Basic alpha diversity measures (observed genera, Simpson's diversity index, and Pielou's evenness) were calculated using the vegan community ecology R package (version 2.6-4). Pairwise comparisons between group means were performed using the Wilcoxon rank-sum test. For beta diversity, Bray-Curtis dissimilarity values were calculated for each pair of samples using vegan. These values were used to perform a principal coordinate analysis (PCoA) with the ecodist package (version 2.0.9). To determine whether there were any statistical differences in the community profiles across sample treatment groups, pairwise PERMANOVA was performed using the pairwiseAdonis package (version 0.4.1) (similarity function = "vegdist", similarity method = "bray", P adjustment method = "holm", permutations = 9,999). The indicspecies package was used to detect associations between species patterns and combinations of treatment groups using default parameters (permutations = 999). Mean relative abundances of select taxonomic groups were compared across treatments using Welch's t-test, and pairwise comparisons between group means were performed using the Wilcoxon rank-sum test (ANCOM analysis). All plots were visualized using the ggplot2 package (version 3.4.2).
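A minimal sketch of this diversity and PERMANOVA workflow with vegan is shown below. The count matrix and grouping factor are simulated placeholders, and vegan::adonis2 stands in for the pairwiseAdonis wrapper that was actually used for the pairwise tests.

```r
# Sketch (not the study data): rarefaction, alpha/beta diversity, PCoA, PERMANOVA
library(vegan)

set.seed(42)
counts <- matrix(rpois(5 * 20, lambda = 300), nrow = 5,
                 dimnames = list(paste0("sample", 1:5), paste0("genus", 1:20)))
food_type <- factor(c("cattle", "cattle", "dog", "poultry", "poultry"))

# Rarefy each sample to an even depth (the study used a 2,500-read minimum)
rare <- rrarefy(counts, sample = 2500)

# Alpha diversity: observed genera, Simpson's index, Pielou's evenness
observed <- specnumber(rare)
simpson  <- diversity(rare, index = "simpson")
pielou   <- diversity(rare, index = "shannon") / log(observed)

# Beta diversity: Bray-Curtis dissimilarities and a PCoA ordination
bray <- vegdist(rare, method = "bray")
pcoa <- cmdscale(bray, k = 2, eig = TRUE)

# Overall permutational test for group differences in community composition
adonis2(bray ~ food_type, permutations = 999)
```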
Benchmarking against a well-defined mock microbial community, we first compared the taxonomic classification using DNA extracted with four commercial kits and analyzed by both 16S rRNA gene amplicon sequencing and shotgun metagenomic sequencing. In animal food trial 1, we evaluated the effects of DNA extraction kit on microbiome analysis and investigated whether applying PNA blockers during 16S rRNA gene amplicon library preparation would effectively inhibit chloroplast and mitochondrial DNA amplification compared with post-sequencing in silico filtering of the relevant reads from the 16S rRNA gene amplicon sequencing data set. In animal food trial 2, we profiled the microbiomes from both sequencing approaches and the resistomes from the shotgun metagenomics data set.

Workflow optimization with the mock microbial community

Out of the four DNA extraction kits, both metagenomic sequencing approaches showed that the Zymo kit generated taxonomic profiles most closely resembling the mock community composition. For this kit, among the nine bead-beating conditions evaluated (PowerLyzer Homogenizer [Qiagen] at 4,000 rpm for 1, 3, and 5 min and at 5,000 rpm for 1 and 3 min, and Vortex-Genie 2 [Scientific Industries, Inc., Bohemia, NY] at maximum speed for 10, 20, 30, and 40 min), bead-beating on the Vortex-Genie 2 for 20 min performed best.

Animal food microbial counts and DNA extract concentrations by kit

Among the three types of animal food samples, the APCs ranged from 7.9 × 10² CFU/g in dry dog food to 8.7 × 10² CFU/g in poultry feed and 6.8 × 10³ CFU/g in cattle feed. The AllPrep and Zymo kits yielded the highest average DNA concentrations across all animal food types (3.8 ± 3.9 ng/µL and 2.8 ± 2.7 ng/µL, respectively), with lower yields from the BloodTissue (2.1 ± 0.8 ng/µL) and PowerSoil (2.1 ± 1.8 ng/µL) kits. These differences were not significant in post-hoc comparisons using Tukey's HSD (all adjusted P-values > 0.05). One-way ANOVA revealed that DNA yields varied significantly by animal food type (F = 19.45, P = 2.6 × 10⁻⁶), with cattle feed having the highest DNA concentrations (5.0 ± 3.0 ng/µL), followed by poultry feed (2.7 ± 0.7 ng/µL) and dry dog food (0.4 ± 0.5 ng/µL). Tukey's HSD also found significant differences in mean DNA concentration among animal food types (all adjusted P-values < 0.05), with the highest DNA concentration in cattle feed with the AllPrep kit and the lowest in dry dog food with the Zymo kit. For dry dog food, which had the lowest overall DNA concentration, the yield was highest using the BloodTissue kit (1.3 ± 0.2 ng/µL), followed by the AllPrep kit (0.4 ± 0.03 ng/µL).
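A minimal sketch of the yield comparison just described (one-way ANOVA followed by Tukey's HSD) is shown below; the concentrations are illustrative stand-ins, not the measured values.

```r
# Hedged sketch: test whether DNA yield differs by animal food type
yield <- data.frame(
  conc_ng_ul = c(5.0, 4.6, 5.4, 2.7, 2.6, 2.8, 0.4, 0.3, 0.5),   # placeholder values
  food_type  = factor(rep(c("cattle", "poultry", "dog"), each = 3))
)

fit <- aov(conc_ng_ul ~ food_type, data = yield)
summary(fit)    # overall F test for a food-type effect
TukeyHSD(fit)   # adjusted pairwise comparisons between food types
```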
Animal food microbiomes differed by PNA blocker and DNA extraction kit when analyzed by 16S rRNA gene amplicon sequencing

In trial 1, the average number of raw reads by 16S rRNA gene amplicon sequencing was 3.3 × 10⁴ ± 1.6 × 10⁴, including large quantities of chloroplast (up to 74.3%) and mitochondrial (up to 43.1%) reads. PNA blockers were then applied during 16S rRNA gene amplicon library preparation and compared with post-sequencing in silico filtering of chloroplast- and mitochondria-related reads. Distinct microbial communities were observed across the animal food types as well as between blocked and unblocked paired samples. After in silico removal of chloroplast and mitochondrial sequences, cattle and poultry feed communities were dominated by members of the order Enterobacterales from several genera, including Pantoea (19.8%–31.9%) and Kosakonia (3.3%–13.0%), with minor contributions from Erwinia and Klebsiella (≤2%). Both cattle and poultry feed communities also contained elevated levels of the orders Pseudomonadales (5.4%–23.0%), Xanthomonadales (3.7%–22.6%), and Micrococcales (1.1%–16.8%). Dry dog food samples possessed a distinct microbial community dominated by Bacillales (6.3%–92.5%), comprised predominantly of the genus Bacillus (10.8%–41.9%), with lesser representation by the genera Virgibacillus, Oceanobacillus, Paenibacillus, and Pseudogracilibacillus. Community compositions within dry dog food samples were also highly variable and dependent on the DNA extraction kit. The largest differences were observed with the BloodTissue kit, where the relative abundance of Bacillales was significantly lower compared with the other kits (P < 0.05). This contrasted with both the cattle and poultry feed samples, which showed more consistent community profiles across DNA extraction kits. Filtering non-bacterial chloroplast and mitochondrial reads from the samples resulted in a significant reduction in the mean number of usable reads between the unfiltered and filtered data sets (unfiltered, mean = 4.1 × 10⁴ ± 1.4 × 10⁴ reads; filtered, mean = 2.5 × 10⁴ ± 1.4 × 10⁴ reads; P = 5.2 × 10⁻¹⁰). In the absence of PNA blockers, the mean fraction of classified microbial reads across all animal food samples was 33.5% ± 18.2%. The addition of PNA blockers prior to sequencing significantly reduced the proportion of chloroplast and mitochondrial sequences even in the absence of filtering. However, the blockers were more effective at depleting chloroplast-derived template, with a mean reduction of ~99%, versus a 57% ± 26% reduction in mitochondrial sequences. Pairwise PERMANOVA performed for each combination of blocked and filtered data showed that both blocking and filtering had a significant impact on overall community composition. The only combination of data sets not shown to be significantly different was the two filtered communities (P = 0.99). Differences in community composition appear to be driven, in part, by the increased biodiversity detected in blocked samples. Compared with the other combinations, unblocked and unfiltered samples had significantly lower observed genera counts, as well as lower Simpson's diversity and Pielou's evenness values. The increased biodiversity detected in blocked samples was further supported by an indicator species analysis showing that 35 genera were significantly associated with blocked samples. Those included several genera known to contain important human and animal pathogens, such as Acinetobacter, Clostridium, Escherichia/Shigella, and Peptostreptococcus. No bacterial genera were significantly associated with unblocked samples, suggesting that the addition of blockers did not result in the depletion of microbial sequences.
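The post-sequencing in silico filtering strategy amounts to dropping ASVs whose taxonomy string is assigned to chloroplast or mitochondria before recomputing community profiles. A hedged sketch with a made-up three-ASV example is shown below.

```r
# Illustrative only: remove chloroplast/mitochondrial ASVs from an ASV table
taxonomy <- data.frame(
  asv   = c("ASV1", "ASV2", "ASV3"),
  taxon = c("d__Bacteria; ...; o__Enterobacterales; g__Pantoea",
            "d__Bacteria; ...; o__Chloroplast",
            "d__Bacteria; ...; f__Mitochondria"),
  stringsAsFactors = FALSE
)
asv_table <- matrix(c(120, 800, 40), nrow = 1,
                    dimnames = list("sample1", taxonomy$asv))

keep     <- !grepl("Chloroplast|Mitochondria", taxonomy$taxon, ignore.case = TRUE)
filtered <- asv_table[, taxonomy$asv[keep], drop = FALSE]

# Fraction of reads retained after filtering (cf. the drop in usable reads above)
sum(filtered) / sum(asv_table)
```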
Animal food microbiome comparisons by 16S rRNA gene amplicon sequencing and shotgun metagenomics side-by-side

In trial 2, an average of 172,308 (range: 128,247–227,580) and 15,314,408 (range: 11,692,126–24,541,655) reads were obtained across all animal food samples by 16S rRNA gene amplicon sequencing and shotgun metagenomics, respectively. Species accumulation curves were generated for all 16S rRNA gene amplicon sequencing and shotgun metagenomics samples (data not shown). All curves reached stationarity, suggesting that increased sampling effort would not significantly increase the number of observed microbial species. An average of 42.0% and 6.1% of reads were mapped to bacteria by 16S rRNA gene amplicon sequencing and shotgun metagenomics, respectively. Dry dog food could not be analyzed by shotgun metagenomics due to low DNA concentrations (< 0.2 ng/µL) in all replicates. Consistent with trial 1, distinct microbial communities were identified across the three animal food types (all P < 0.05). Both cattle and poultry feed samples were dominated by members of the orders Bacillales, Enterobacterales, and Pseudomonadales, whereas the dry dog food samples had a more diverse community comprised predominantly of Bacillales, Burkholderiales, Clostridiales, and Rhizobiales. The choice of sequencing approach did not significantly alter the community profiles within animal food types (all P > 0.25); however, differences could still be observed in relative abundances at the order rank. A limited number of bacterial orders were associated with specific sequencing approaches. These included Rhizobiales and Staphylococcales, which were identified exclusively in samples analyzed by 16S rRNA gene amplicon sequencing, whereas the order Lysobacterales was identified exclusively in samples sequenced using shotgun metagenomics. Differences in community profiles between the two sequencing approaches were also observed at the genus level within the predominant microbial orders. The primary genera included Pantoea (18.6%–39.2%), Enterobacter (0%–18.3%), Kosakonia (2.9%–13.7%), and Klebsiella (0%–5.0%). Detection of Pantoea, Enterobacter, and Kosakonia appeared to be independent of the sequencing approach, as they were identified at similar frequencies across all samples. However, Klebsiella was identified at a higher rate in the shotgun metagenomics data set (3.5% ± 0.9%) than in the 16S rRNA gene amplicon sequencing data set (0.5% ± 0.7%) (P = 1.05 × 10⁻⁷). A similar trend was observed within the order Bacillales, with the primary genera identified as Bacillus (cattle and poultry feeds) and Anaerobacillus (dry dog food) using 16S rRNA gene amplicon sequencing. This contrasted with the shotgun metagenomics data set, which contained predominantly Priestia for cattle and poultry feeds (no dry dog food shotgun data were available). Direct species-level comparisons between 16S rRNA gene amplicon and shotgun metagenomic sequences were not possible due to drastic differences in taxonomic resolution. Generally, both methods were able to successfully classify microbial reads down to the genus level, with 16S rRNA gene amplicon sequencing outperforming shotgun metagenomics at higher taxonomic ranks. However, at the species level, shotgun metagenomics was able to classify 74.4% ± 4.1% of microbial reads, significantly higher than the 9.9% ± 4.8% classified by 16S rRNA gene amplicon sequencing (P < 0.05).
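For comparisons such as the Klebsiella example above, mean relative abundances can be compared between sequencing approaches with Welch's t-test (R's default t.test) or the rank-based Wilcoxon test; the percentages below are placeholders, not the study data.

```r
# Sketch of a between-approach relative abundance comparison (illustrative values)
klebsiella <- data.frame(
  rel_abund = c(0.5, 0.3, 1.2, 0.2, 3.5, 2.9, 4.1, 3.6),   # percent relative abundance
  approach  = factor(rep(c("16S", "shotgun"), each = 4))
)

t.test(rel_abund ~ approach, data = klebsiella)        # Welch's t-test (unequal variances)
wilcox.test(rel_abund ~ approach, data = klebsiella)   # Wilcoxon rank-sum alternative
```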
Animal food resistomes by shotgun metagenomic sequencing

In trial 2, shotgun metagenomic sequencing of cattle and poultry feed samples revealed 10 AMR gene/protein families. The relative abundances of the detected AMR genes are shown in . Although the overall prevalence of AMR genes in the animal feed samples was low, we identified resistance genes encoding beta-lactamases (blaCMH-1 and blaACT-GC1 gene/protein families), erythromycin/lincomycin/pristinamycin/tylosin resistance (erm(O)), quinolone resistance (qnrE2 gene/protein family), and fosfomycin resistance (fosA8) in cattle feed. Additionally, we found phenicol/quinolone resistance genes (oqxA2 and oqxB29 gene/protein families) in both cattle and poultry feed samples. Furthermore, we identified three multidrug resistance (MDR) efflux pump genes (emrD, norM, and kdeA) that have the potential to confer resistance to different antimicrobials and dyes.
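One way to visualize a resistome summary like this is with the ggplot2 package already used for plotting in this study. The gene-to-feed assignments below follow the text, but the RPKM values are placeholders rather than the measured abundances.

```r
# Illustrative bar chart of AMR gene abundances by feed type (placeholder RPKM values)
library(ggplot2)

resistome <- data.frame(
  gene = c("blaCMH-1", "erm(O)", "qnrE2", "fosA8", "oqxA2", "oqxB29", "oqxA2", "oqxB29"),
  feed = c(rep("cattle feed", 6), rep("poultry feed", 2)),
  rpkm = c(0.8, 0.3, 0.5, 0.2, 1.1, 0.9, 0.7, 0.6)
)

ggplot(resistome, aes(x = gene, y = rpkm, fill = feed)) +
  geom_col(position = "dodge") +
  labs(x = NULL, y = "RPKM") +
  theme_minimal()
```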
This study is one of the first attempts to employ both 16S rRNA gene amplicon sequencing and shotgun metagenomics to characterize the microbiomes and resistomes in three types of animal food products. Our analysis workflows included genomic DNA extraction, PCR amplification of the 16S V3–V4 region (for 16S rRNA gene amplicon sequencing only), library preparation, sequencing on MiSeq (for 16S rRNA gene amplicon sequencing) and NextSeq (for shotgun metagenomics), and bioinformatic analysis (read processing, taxonomic classification, and AMR gene identification). Since there were no previous studies focused on metagenomic analyses of animal food, we first determined which DNA extraction method(s) most accurately reconstructed the mock community structure. We followed up by evaluating the workflows in animal food and showed the critical need to remove chloroplast and mitochondrial DNA using two different strategies. Finally, we obtained microbiomes for these animal food sample types using both sequencing approaches and resistomes using shotgun metagenomics. Many surveys have indicated that the bacterial genera/species in the ZymoBIOMICS mock microbial community include those commonly found in animal food, such as Salmonella, E. coli, Enterococcus, Listeria, Bacillus, and Lactobacillus. Nonetheless, we acknowledge a major limitation of this study: the mock community was not spiked into animal food and analyzed as a mixture en masse. As shown in the supplemental material, our mock community analysis demonstrated that the AllPrep and Zymo kits were superior among the four kits tested at extracting genomic DNA from the eight bacteria, as assessed by both 16S rRNA gene amplicon and shotgun metagenomic sequencing. The PowerSoil and Zymo kits were also able to extract genomic DNA from the two fungi, as shown in the shotgun metagenomics data set. The analyses further indicated that bead-beating on the Vortex-Genie 2 for 20 min was optimal for the Zymo kit. These represent important efforts toward standardization of an essential step in both sequencing workflows. Similar efforts to standardize DNA extraction methods have been reported for human gut and urine samples. Typically, extended bead-beating times are associated with greater DNA shearing and reduced apparent microbial diversity, as they can lead to loss of fragments, incomplete or inefficient amplification, and inaccurate representation of the microbial community. However, consistent community profiles were observed even after 40 min of bead-beating using the Vortex-Genie. This is likely due to the lower speed (~3,000 rpm) and relative inefficiency of benchtop vortexers compared with dedicated tissue homogenizers.
A recent study compared various lysis protocols (thermal, enzymatic, and mechanical [bead]), and for the bead-based method, over 40 different bead material/size combinations and cell disruptor type/intensity/run time combinations; this study proposed the use of Measurement Integrity Quotient score (range: 0–100, assigned by measuring the root mean square error of observed relative abundances that fall outside the band of manufacturing tolerance, which is 15% for the ZymoBIOMICS mock community) for easy comparison of the methods, in order to reduce bias associated with DNA extraction . When applied to animal food, not surprisingly, taxonomic profiles generated using 16S rRNA gene amplicon sequencing varied greatly based on the DNA extraction kits used . It is also noteworthy that the concentrations of DNA extracts in animal food samples (average: 0.4–5.0 ng/μL) were lower than those for the mock community (average: 10.6 ng/µL), particularly for dry dog food. Industrial dry dog food is usually processed at temperatures of 80–160°C under high pressure during the extrusion step, which significantly reduces the bacterial load. Since we used a pelleting method to obtain microbial cells from the animal food samples, the lack of intact cells greatly reduced the size of the pellets, thereby resulting in lower DNA concentrations being extracted. A recent study showed that APCs in 50% of the dry dog food samples tested did not exceed 10 2 CFU/g . Nonetheless, the difficulty with extracting sufficient concentrations of genomic DNA from animal food, in general, contributed in part to it being a challenging matrix for metagenomic analysis. Our preliminary data clearly indicated that chloroplast and mitochondria consumed significant amounts of 16S rRNA gene amplicon sequencing reads . However, we found that the application of 16S rRNA sequencing PNA blockers prior to PCR amplification significantly reduced these non-bacterial contaminants in the final community profile. The reduction of chloroplast and mitochondrial reads in the sequencing data sets had the added benefit of increasing the detection rate of relatively low abundant microorganisms . This is particularly useful in food microbiology studies where pathogenic organisms of interest can exist at concentrations at or below the level of detection using marker gene surveys. If the study design does not require the detection of rare species or depth of sequencing is not a consideration, it may be advisable to forego the application of PNA blockers and simply remove the chloroplast and mitochondrial sequence reads from the data sets post-sequencing. In this study, both blocked and unblocked samples had similar community profiles after filtering across diverse animal food types. Several robust tools have been developed and incorporated into the bioinformatic pipelines used for targeted gene sequence analyses, making the removal of contaminating sequence reads easier. It is widely accepted that 16S rRNA gene amplicon sequencing and shotgun metagenomic sequencing generally serve two different purposes for microbiome analysis . Targeting the 16S rRNA gene allows the identification of relatively high and low abundant taxa and is economical. Shotgun metagenomic sequencing allows for high-resolution taxonomic classification due to the higher sequencing depth and broader genome coverage compared with 16S rRNA gene amplicon sequencing, and it is capable of identifying genomic features such as serotypes, AMR genes, and virulence factors, among others. 
In our study, both sequencing approaches allowed comprehensive characterization of bacterial diversity in the three types of animal food samples, although the resolution of 16S rRNA gene amplicon sequencing was clearly lower at the species level. Interestingly, we observed that 16S rRNA gene amplicon sequencing had better resolution than shotgun metagenomics at higher taxonomic ranks (phylum to genus; Fig. 4), corroborating a recent report. Shotgun metagenomic sequencing required higher-quality DNA or a better multiplexing strategy, as some samples did not pass the quality filter or read threshold. This limitation was more prominent in resistome characterization. For instance, dry dog food proved to be a challenging matrix and yielded low DNA concentrations. Low microbial biomass and high concentrations of matrix components resulted in the majority of reads (93.9%) in the shotgun metagenomic data set being unclassified. Many of these reads may be attributed to non-prokaryotic DNA or to previously uncultured/uncharacterized microorganisms. A global metagenomic analysis of AMR in sewage reported that, across all data sets, only 0.3% of the reads were assigned to 16S/18S rRNA, and of these, 96.8% and 2.9% were mapped to bacteria and eukaryotes, respectively, with just 0.05% of the reads assigned to AMR genes. All of these factors led to challenges with both taxonomic composition and AMR gene identification in complex animal food communities. The predominant microbial taxa varied by animal food type using both sequencing approaches. The differences in community composition are likely due to the different base ingredients used to make the finished products. We identified 10 AMR gene/protein families conferring resistance to multiple antimicrobials; these warrant further studies in which larger sample sizes are examined. For studies where the only interest is microbial community composition, either 16S rRNA gene amplicon sequencing or shotgun metagenomics may be used. However, when identifying AMR genes is an essential requirement, shotgun metagenomics should be the technique of choice. Nonetheless, this study provides a snapshot of in-depth microbiomes and resistomes in animal food using both sequencing approaches. These promising next-generation sequencing technologies, upon further standardization, will be valuable tools to help better understand bacterial and AMR gene diversity in animal food and to guide pathogen control and AMR prevention efforts.
Laboratory evaluation of the miniature direct-on-blood PCR nucleic acid lateral flow immunoassay (mini-dbPCR-NALFIA), a simplified molecular diagnostic test for

Correct and timely diagnosis of malaria is key in the management and control of this disease. Traditionally, microscopy of Giemsa-stained thick and thin blood films has been the standard diagnostic technique applied in endemic settings. Although it is able to differentiate the causative Plasmodium species, its sensitivity for low parasite densities is limited, and adequate slide reading requires extensive training and experience. The development of rapid diagnostic tests (RDTs) has brought a fast and easy-to-use alternative for malaria diagnosis. Since their introduction, RDTs have proven to be an essential tool for malaria control in remote endemic regions. However, they usually do not detect < 100 parasites per microliter of blood, which makes them of limited use in near-elimination areas where such low parasite counts are often prevalent. False-negative RDT results can also arise for P. falciparum strains with a genetic deletion for the antigen targeted by RDTs, histidine-rich protein 2 (HRP2). Over the past decade, this genotype has become widespread in South America, and increasing prevalence has now been reported for African and Asian countries as well. Conversely, residual parasite antigen in the blood after treatment and complete parasite clearance is frequently observed and may result in false-positive RDT diagnosis. The limitations of microscopy and RDTs can be overcome by the use of nucleic acid amplification techniques (NAATs). Examples are endpoint polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR), techniques that are commonly applied for malaria diagnosis and research in high-resource settings. However, the requirements for well-trained laboratory personnel and expensive PCR machines that rely on a stable power source restrict the use of NAATs in malaria-endemic countries. An alternative to PCR is loop-mediated isothermal amplification (LAMP), a simplified molecular assay with an easy readout that makes use of isothermal DNA amplification. Nevertheless, current LAMP formats are generally unsuited for multiplex amplification, hampering Plasmodium species differentiation. Consequently, there is still a need for a highly sensitive, user-friendly and field-deployable diagnostic test for malaria that can discriminate Plasmodium species. An innovative assay has recently been developed to meet these requirements: the miniature direct-on-blood PCR nucleic acid lateral flow immunoassay (mini-dbPCR-NALFIA, Fig. ). This platform combines three techniques to overcome the issues encountered when attempting to implement traditional PCR methods in limited-resource settings. First of all, the direct-on-blood PCR (dbPCR) uses a specialized reagent mix that eliminates the need for DNA extraction prior to amplification. Instead, the PCR can be performed directly on a template of EDTA-anticoagulated whole blood. The dbPCR also has a duplex format that can detect all (pan) Plasmodium species infecting humans and differentiate P. falciparum infections. The second innovative element is the use of a miniature thermal cycler to run the dbPCR, called miniPCR (miniPCR bio, Massachusetts, USA). It is a hand-held, portable device that can be programmed with a smartphone or laptop application, either through USB cable or Bluetooth connection.
The latest model, mini16, has an affordable price of approximately 800 USD (compared to 3,000–5,000 USD for a conventional PCR thermal cycler) and can process 16 samples per run. The mini16 can run on mains power, but also on a portable, solar-chargeable power pack, making the system completely autonomous and suitable for rural or emergency settings with unstable or no electricity supply. Finally, the result of the dbPCR is easily and rapidly read out with NALFIA, an immunochromatographic flow strip that can detect labelled PCR amplicons. A NALFIA strip is placed in a mixture of dbPCR product and running buffer, after which the dbPCR amplicons flow over the strip. Neutravidin-labelled carbon particles on the NALFIA strip bind to the labelled dbPCR amplicons, and this complex is visualized within 10 min when it is captured by the two amplicon-specific antibody lines on the NALFIA strip. Earlier prototypes of the dbPCR-NALFIA assay have shown promising results in field evaluations, with sensitivity and specificity up to 97.2% and 95.5%, respectively, using light microscopy as the reference standard, and a detection limit for P. falciparum infections of 1 parasite per microlitre (p/μL) of blood. In these studies, the dbPCR was still run on a conventional thermal cycler. By optimizing the dbPCR protocol, the mini-dbPCR-NALFIA can now be run on a miniPCR device, making the method better adapted to field settings with limited resources. This article describes the laboratory evaluation of the optimized mini-dbPCR-NALFIA as a multiplex assay for the detection of pan-Plasmodium and P. falciparum infections in blood.

Direct-on-blood PCR reagent mix

The dbPCR is a duplex reaction targeting two regions in the Plasmodium 18S rRNA gene: one that is highly conserved in the genus Plasmodium (the pan-Plasmodium target) and a second that is specific for P. falciparum. By using 5′-labelled primer pairs (Eurogentec, Liège, Belgium) previously described in the literature, both target amplicons carry a biotin label and a target-specific label (Table ). The dbPCR reagent mix consists of 10 μL of 2× Phusion Blood PCR buffer (Thermo Fisher Scientific, Waltham, MA, USA), 0.1 μL of Phire Hot Start II DNA polymerase (Thermo Fisher Scientific), labelled primers, and sterile water to a total volume of 22.5 μL per sample.

Direct-on-blood PCR on miniature thermal cycler

The template format for the dbPCR is 2.5 μL of EDTA-anticoagulated blood. Every mini-dbPCR-NALFIA run includes controls: a P. falciparum-infected EDTA blood sample and a Plasmodium-negative EDTA blood sample. As a first step, the samples were lysed at 98 °C for 10 min on the mini16 thermal cycler (miniPCR bio, Massachusetts, USA), a miniature endpoint PCR device (dimensions: 5 × 13 × 10 cm, weight: 0.5 kg) that can also be used for heat-block protocols. The miniPCR smartphone application was used to programme the lysis protocol on the mini16 device through a Bluetooth connection. After lysis of the EDTA blood templates, 22.5 μL of the dbPCR reagent mix was added to each (total reaction volume 25 μL). The dbPCR was also run on the mini16 thermal cycler. Its protocol consisted of an initial activation step of 1 min at 98 °C; 10 cycles of 5 s at 98 °C, 15 s at 61 °C and 30 s at 72 °C; 28 cycles of 5 s at 98 °C, 15 s at 58 °C and 30 s at 72 °C; and a final extension step of 72 °C for 2 min.
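As a convenience (not part of the published protocol), the per-sample recipe above can be scaled to a master mix for a full mini16 run. Primer volumes are not specified in the text, so primers and water are lumped into the fill-up volume, and the 10% pipetting overage is an assumption.

```r
# Hedged sketch: scale the 22.5 uL per-sample dbPCR mix for n samples
master_mix <- function(n_samples, overage = 0.10) {
  n <- ceiling(n_samples * (1 + overage))           # reactions to prepare, with overage
  per_sample <- c(buffer_2x = 10, polymerase = 0.1) # volumes in uL, from the recipe above
  per_sample["primers_water"] <- 22.5 - sum(per_sample)  # primers + water fill to 22.5 uL
  round(per_sample * n, 1)                          # total volumes in uL
}

master_mix(16)   # one full mini16 run (16 samples)
```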
Read-out with NALFIA

Read-out of the results was done with NALFIA (Abingdon Health, York, UK). The test strip consists of a sample absorption pad, a conjugate pad with neutravidin-labelled carbon that binds to the amplicons' biotin label, and a nitrocellulose membrane coated with anti-digoxigenin (Dig) and anti-fluorescein isothiocyanate (FITC) antibody lines that detect and visualize the amplicon-carbon complex. A third line on the membrane functions as a flow control (Fig. ). After completion of the dbPCR run on the mini16, a NALFIA strip was placed in a tube with 10 μL of dbPCR product and 140 μL of running buffer. After a 10 min incubation, the NALFIA results were read out. When the first line, directed against the Dig-labelled pan-Plasmodium amplicon, was positive, it indicated the presence of a Plasmodium infection. If the second, anti-FITC test line for the fluorescein amidite (FAM)-labelled P. falciparum amplicon was also positive, the sample was infected specifically with P. falciparum (or a mixed infection including P. falciparum). A sample with a positive pan-Plasmodium line and an absent P. falciparum line was classified as positive for a non-falciparum malaria species, i.e. Plasmodium vivax, Plasmodium malariae, Plasmodium ovale or Plasmodium knowlesi. When only the P. falciparum line was visible, the result was interpreted as positive for this species. A NALFIA test was considered invalid when the flow control line was absent.

Laboratory evaluation

Limit of detection

The limit of detection (LoD) for the pan-Plasmodium and P. falciparum targets was determined by testing 23 aliquots of a tenfold dilution series of a FCR3 ring-stage P. falciparum culture. The parasite density of the culture was determined by light microscopy. Dilutions were made in Plasmodium-negative EDTA blood from the Dutch blood bank. Tested parasite densities ranged from 1000 to 0.1 p/μL. The LoD was defined as the lowest parasite density that was detected with 90% confidence (≥ 21 of 23 runs).

Sensitivity and specificity

To determine the laboratory sensitivity and specificity of the mini-dbPCR-NALFIA, a set of 87 blood specimens was tested, including samples from returned Dutch travellers with suspected malaria infection, Dutch blood donors, and intensive care unit patients from the Academic Medical Centre (Amsterdam, the Netherlands). All samples were derived from a pre-established biobank at the Laboratory for Experimental Parasitology at the Academic Medical Centre. Neither the blood donors nor the intensive care unit patients had travelled to malaria-endemic areas in the 6 months before blood collection. The malaria status of all samples had been determined previously using the Alethia Malaria assay (Meridian Bioscience, Cincinnati, USA), a highly sensitive LAMP-based method for diagnosing malaria in non-endemic settings with a detection limit of 2 p/µL for P. falciparum and 0.1 p/µL for P. vivax. For samples with a positive Alethia result (n = 29, all returned travellers), the infecting Plasmodium species had been determined with expert microscopy. This set included 23 P. falciparum, 3 P. vivax, 2 P. ovale and 1 P. malariae infections. The P. falciparum samples had been quantified microscopically and ranged from 10⁶ to 10² p/μL; the parasite counts of the non-falciparum malaria samples had not been determined at the time of microscopic examination. The 58 Plasmodium-negative samples comprised 19 samples from Dutch blood donors, 16 samples from intensive care unit patients and 23 samples from malaria-suspected returned travellers with a negative Alethia diagnosis. The operator who tested all samples with mini-dbPCR-NALFIA was blinded to the reference test outcomes.
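The LoD criterion above (lowest density detected in at least 21 of 23 replicates, i.e. roughly 90% confidence) can be applied to the replicate read-outs as sketched below; the hit counts per dilution are hypothetical.

```r
# Illustrative LoD read-out: positives per dilution across 23 replicate runs
lod_data <- data.frame(
  density_p_ul = c(1000, 100, 10, 1, 0.1),
  positives    = c(23, 23, 23, 21, 9),   # hypothetical hit counts
  replicates   = 23
)

lod_data$hit_rate <- lod_data$positives / lod_data$replicates
detected <- subset(lod_data, positives >= 21)   # densities meeting the 21/23 criterion
min(detected$density_p_ul)                      # reported LoD under this criterion
```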
Laboratory evaluation

Limit of detection

The limit of detection (LoD) for the pan-Plasmodium and P. falciparum targets was determined by testing 23 aliquots of a tenfold dilution series of a FCR3 ring-stage P. falciparum culture. The parasite density of the culture was determined by light microscopy. Dilutions were made in Plasmodium-negative EDTA blood from the Dutch blood bank. Tested parasite densities ranged from 1000 to 0.1 p/μL. LoD was defined as the lowest parasite density that was detected with 90% confidence (≥ 21 of 23 runs).

Sensitivity and specificity

To determine the laboratory sensitivity and specificity of the mini-dbPCR-NALFIA, a set of 87 blood specimens was tested, including samples from returned Dutch travellers with suspected malaria infection, Dutch blood donors, and intensive care unit patients from the Academic Medical Centre (Amsterdam, the Netherlands). All samples were derived from a pre-established Biobank at the Laboratory for Experimental Parasitology at the Academic Medical Centre. Neither the blood donors nor the intensive care unit patients had travelled to malaria-endemic areas in the 6 months before blood collection. The malaria status of all samples had been determined previously using the Alethia Malaria assay (Meridian Bioscience, Cincinnati, USA), a highly sensitive LAMP-based method for diagnosing malaria in non-endemic settings with a detection limit of 2 p/µL for P. falciparum and 0.1 p/µL for P. vivax . For samples with a positive Alethia result (n = 29, all returned travellers), the infecting Plasmodium species had been determined with expert microscopy. This set included 23 P. falciparum, 3 P. vivax, 2 P. ovale and 1 P. malariae infections. The P. falciparum samples had been quantified microscopically and ranged from 10⁶ to 10² p/μL; the parasite counts of the non-falciparum malaria samples had not been determined at the time of microscopic examination. The 58 Plasmodium-negative samples comprised 19 samples from the Dutch blood donors, 16 samples from intensive care unit patients and 23 samples from malaria-suspected returned travellers with a negative Alethia diagnosis. The operator who tested all samples with mini-dbPCR-NALFIA was blinded to the reference test outcomes.

Accordance and concordance

Accordance and concordance are measures to express, respectively, the repeatability (intra-operator variability) and reproducibility (inter-operator variability) of qualitative tests . To evaluate the accordance and concordance of the mini-dbPCR-NALFIA, a single individual prepared 8 aliquots of a dilution series of FCR3 ring-stage P. falciparum culture and five Plasmodium-negative blood samples. For the accordance assessment, one operator tested three sets of aliquots with mini-dbPCR-NALFIA on three consecutive days, using the same equipment and dbPCR reagent batch numbers. To determine the concordance of the mini-dbPCR-NALFIA, five different operators from the same laboratory each tested a set of sample aliquots once. All five operators were blinded to the nature of the samples and used the same equipment and dbPCR reagent batch numbers.

Statistical analysis

Sensitivity and specificity were calculated for the pan-Plasmodium target, the P. falciparum target and the overall assay. The Clopper-Pearson Exact method was used to calculate the 95% confidence interval (CI) of the sensitivity and specificity. Accordance and concordance were calculated in a random framework, using the formulae proposed by Van der Voet and Van Raamsdonk (2004):

$$ACC_{random}=\frac{1}{L}\sum_{i}\left(p_{0,i}^{2}+p_{1,i}^{2}+p_{2,i}^{2}+p_{3,i}^{2}\right)$$

where L represents the number of tested samples, $p_{0,i}$ the proportion of negative results, $p_{1,i}$ the proportion of pan-Plasmodium single positive results (i.e. only the pan line), $p_{2,i}$ the proportion of P. falciparum single positive results (i.e. only the P. falciparum line) and $p_{3,i}$ the proportion of double positive results (i.e. both pan and P. falciparum lines), for a particular sample i.
For the random concordance, the following formula was used:

$$CON_{random}=P_{0,i}^{2}+P_{1,i}^{2}+P_{2,i}^{2}+P_{3,i}^{2}$$

where $P_{0,i}=\frac{1}{L}\sum_{i}^{L}p_{0,i}$, $P_{1,i}=\frac{1}{L}\sum_{i}^{L}p_{1,i}$, $P_{2,i}=\frac{1}{L}\sum_{i}^{L}p_{2,i}$ and $P_{3,i}=\frac{1}{L}\sum_{i}^{L}p_{3,i}$. Here, L represents the number of different operators, and $p_{0,i}$, $p_{1,i}$, $p_{2,i}$ and $p_{3,i}$ represent the proportion of negative, pan single positive, P. falciparum single positive and double positive results for a particular operator i. The 95% CI of the accordance and concordance estimates was calculated by means of bootstrapping .
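As a worked illustration of these calculations, the sketch below implements the Clopper-Pearson exact interval and the two random-framework formulae; the example inputs are illustrative only, and the bootstrap CIs used in the study are not reproduced here.

```python
# Illustrative implementations of the statistics described above; example inputs are arbitrary.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for k successes in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

def accordance_random(per_sample_props):
    """ACC_random: mean over samples of the summed squared category proportions."""
    L = len(per_sample_props)
    return sum(sum(p * p for p in props) for props in per_sample_props) / L

def concordance_random(per_operator_props):
    """CON_random: sum of squared category proportions after averaging over operators."""
    L = len(per_operator_props)
    means = [sum(op[k] for op in per_operator_props) / L for k in range(4)]
    return sum(m * m for m in means)

# Example: 28 of 29 reference-positive samples detected.
print(clopper_pearson(28, 29))            # roughly (0.822, 0.999)

# Category order: [negative, pan line only, P. falciparum line only, both lines].
print(accordance_random([[0.0, 0.0, 0.0, 1.0], [1/3, 0.0, 0.0, 2/3]]))
print(concordance_random([[0.2, 0.0, 0.0, 0.8], [0.4, 0.0, 0.0, 0.6]]))
```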
Limit of detection

The results of the P. falciparum culture dilution series testing are displayed in Table . At a confidence level of 90%, LoD was determined to be 100 p/µL for the pan-Plasmodium test line and 10 p/µL for the P. falciparum line.

Sensitivity and specificity

Of the 29 Plasmodium samples, 28 tested positive for the pan-Plasmodium line in the mini-dbPCR-NALFIA, while 1 P. vivax sample was false-negative for this line. All 23 P. falciparum samples also showed the P. falciparum test line. 57 Plasmodium-negative blood samples were negative for both test lines with mini-dbPCR-NALFIA; 1 sample from a Dutch blood donor was false-positive for the P. falciparum line. None of the test samples had an invalid NALFIA result. This resulted in a sensitivity of 96.6% (95% CI, 82.2%–99.9%) and a specificity of 100% (95% CI, 93.8%–100%) for the pan-Plasmodium line. The sensitivity of the P. falciparum test line was calculated to be 100% (95% CI, 85.2%–100%), and its specificity 98.4% (95% CI, 91.6%–100%) . When the results of the two NALFIA test lines were combined, there were three possible outcomes: a non-falciparum infection, a P. falciparum infection and Plasmodium-negative. This approach resulted in an overall sensitivity of 96.6% (95% CI, 82.2%–99.9%) and specificity of 98.3% (95% CI, 90.8%–100%) of the mini-dbPCR-NALFIA.

Accordance and concordance

An overview of the accordance test results for the mini-dbPCR-NALFIA is shown in Table .
The overall accordance of all tested samples in a random framework was 93.7% (95% CI, 89.5%–97.8%). Table summarizes the test results for the five different operators of the mini-dbPCR-NALFIA. Based on these data, the random concordance was calculated to be 84.6% (95% CI, 79.5%–89.6%).

This study demonstrates that the mini-dbPCR-NALFIA is a robust, highly sensitive and specific tool for molecular diagnosis of malaria. It has a simpler workflow than traditional NAATs and requires far fewer resources. By incorporating the mini16 as a portable, battery-powered thermal cycler, the mini-dbPCR-NALFIA can be used even in remote healthcare settings without an extensive laboratory infrastructure or stable power supply. With an excellent overall sensitivity of 96.6% and specificity of 98.3%, the diagnostic accuracy of the mini-dbPCR-NALFIA is similar to that of traditional molecular techniques for malaria diagnosis, such as conventional PCR, qPCR and nested PCR . One P. vivax sample gave a false-negative result. This may have been due to a low parasite density, which is common in P. vivax infections . Unfortunately, whether this was indeed the case for this sample was unknown, as its parasitaemia had not been determined with microscopy at the time of diagnosis. Also, this particular sample had been in −20 °C storage for 2 years, which may have affected the DNA integrity. The occasional false-positive result in one Plasmodium-negative sample could have been the result of carry-over contamination from a Plasmodium-positive sample during the preparation of the dbPCR or NALFIA. The LoDs of 100 p/μL for the pan-Plasmodium line and 10 p/μL for the P. falciparum line demonstrate the high sensitivity of the mini-dbPCR-NALFIA for low falciparum parasite densities.
Although the LoD of extremely sensitive nested and qPCR techniques can go as low as 0.1 p/μL , most importantly, the mini-dbPCR-NALFIA is still significantly more sensitive for low falciparum parasitaemias than light microscopy and RDTs, which generally fail to detect infections below 50 to 200 p/µL . As such, the assay will be able to diagnose the majority of symptomatic malaria patients in an endemic setting, who often present with a parasitaemia above 1000 p/μL . On top of that, mini-dbPCR-NALFIA could potentially be used for screening and detection of asymptomatic falciparum cases with sub-microscopic infections . As no quantified non-falciparum samples were available for this study, additional evaluation of the LoD of the mini-dbPCR-NALFIA for these other Plasmodium species is warranted. When analysing a P. falciparum blood dilution series and five malaria-negative blood samples, the mini-dbPCR-NALFIA showed a high accordance of 93.7%, demonstrating the robustness of the method. Discordant results were mainly observed for parasite densities < 10 p/μL, which are close to the LoD of the test. At such low Plasmodium DNA concentrations, stochastic variations tend to have a more prominent influence on the assay's outcome. This phenomenon was also believed to be the main reason for the concordance being 84.6%. The laboratory experience of the different operators in the concordance evaluation ranged from basic to proficient. They were only given written and oral instructions, which was sufficient for them to correctly perform the mini-dbPCR-NALFIA. This observation underlined its simplicity and user-friendliness. Compared to other molecular methods for malaria diagnosis, mini-dbPCR-NALFIA shares some characteristics with LAMP, which also has a simplified protocol with easy read-out and high accuracy for diagnosing malaria, including low-density falciparum infections . However, LAMP currently has no multiplex capability and, therefore, cannot differentiate Plasmodium species in one reaction. This issue is not encountered with mini-dbPCR-NALFIA, a duplex assay that can distinguish falciparum malaria from infections with other Plasmodium species. To further evaluate the performance of the mini-dbPCR-NALFIA for diagnosis of (submicroscopic) infections with P. vivax, P. malariae and P. ovale, additional research is required, since this study tested only a limited number of non-falciparum malaria blood samples. The adaptation of the assay described by Roth et al. to operate on a portable, battery-powered mini16 thermal cycler has made it possible to run the dbPCR in the harsh, resource-limited conditions of sub-Saharan Africa. Implementation in such settings is also supported by the stability of the dbPCR reagents, which did not show loss of performance after storage at 4 °C for 9 months . Another strength of the mini-dbPCR-NALFIA is its affordability: the testing costs per sample are economical (0.30 USD for the dbPCR reagents, 2.80 USD per NALFIA test) and introduction of the mini16 greatly reduces the cost of the required equipment (800 USD per device). A planned economic evaluation will assess the cost-effectiveness of the mini-dbPCR-NALFIA in different endemic areas, compared to currently implemented malaria point-of-care diagnostics.
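To make the affordability point concrete, a simple back-of-the-envelope calculation is sketched below; the number of samples over which the device is amortised is an assumption for illustration, not a figure from the study.

```python
# Rough per-sample cost sketch using the figures quoted above (USD).
DBPCR_REAGENTS = 0.30
NALFIA_TEST    = 2.80
MINI16_DEVICE  = 800
ASSUMED_SAMPLES_PER_DEVICE = 2_000   # hypothetical device lifetime throughput

consumables = DBPCR_REAGENTS + NALFIA_TEST
per_sample  = consumables + MINI16_DEVICE / ASSUMED_SAMPLES_PER_DEVICE
print(f"Consumables: {consumables:.2f} USD/sample; with device amortised: {per_sample:.2f} USD/sample")
# Consumables: 3.10 USD/sample; with device amortised: 3.50 USD/sample
```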
A limitation of the current mini-dbPCR-NALFIA is its inability to differentiate between the non-falciparum malaria species and identify mixed infections. Although the vast majority of malaria cases in Africa is caused by P. falciparum, the relative contribution of P. vivax, P. malariae and P. ovale infections in this region appears to be increasing . Fortunately, the mini-dbPCR-NALFIA has a flexible design: an alternative format is currently under development, which will have a P. falciparum and a P. vivax test line. In the same way, the mini-dbPCR-NALFIA also has the potential to be modified to detect other blood-borne pathogens. In areas with high malaria transmission, the mini-dbPCR-NALFIA could be a valuable alternative to RDTs, which are likely to suffer from false-positive results due to P. falciparum HRP2 antigen persistence in the blood after clearance of the parasites . Nevertheless, it is possible that a similar issue may arise for molecular diagnostic techniques: there have been a number of studies showing that PCR-based detection of Plasmodium DNA in blood can remain positive up to seven weeks after curative malaria treatment . This could either be caused by residual circulating DNA fragments or by a small subset of parasites with extended survival. Although this phenomenon could have implications for the specificity of the mini-dbPCR-NALFIA, its relevance for the application of the assay as a field diagnostic remains a subject of further study.

The mini-dbPCR-NALFIA is an easy-to-use method for sensitive and specific diagnosis of malaria. Compared to other simplified molecular diagnostics, it has the advantages that there is no need for prior sample processing and that differentiation of P. falciparum and non-falciparum infections is possible thanks to its duplex format. A handheld miniature thermal cycler makes the assay well-adapted to resource-poor conditions in malaria endemic regions. The high diagnostic accuracy and low LoD of the mini-dbPCR-NALFIA could make it a valuable tool in many malaria control programmes, especially for detection of asymptomatic and low-density cases in near-elimination areas. A phase-3 field trial is currently being conducted to evaluate the potential of the mini-dbPCR-NALFIA in different epidemiological settings.
Engaging Parents in Sexually Explicit Media Literacy Education: Expert Perspectives From Australia and New Zealand | 7e281849-67bd-4928-b26e-d0ff53950ded | 11876791 | Health Literacy[mh] | Introduction Australian young people receive education about sex and relationships from a range of sources, including parents, families, health professionals, and online and traditional media, in addition to formal school‐based lessons . The media, including sexually explicit media (SEM) and pornography, can play an educative role for young people in Australia who find traditional school‐based relationships and sexuality education (RSE) do not meet their needs . Embarrassment has been identified as a key reason for young people venturing online to access information anonymously , rather than discussing their queries with their parents . Young people view pornography frequently and from early ages. One Australian study reports 44% of those aged nine to 16 have seen sexual imagery . More recent Australian data found 75% of 16–18‐year‐olds ( n = 1004) had viewed pornography, with 13 being the reported average age of first viewing . Most recently, among Australian young people aged 15–20, 86% of males and 69% of females had viewed pornography . These studies are consistent with data from a nationally representative survey in New Zealand where 75% of participants aged 14–17 had viewed pornography before turning 17, with 27% having done so by the age of 12 . Prepubescent children are more likely to unintentionally encounter pornography and ignore it, rather than seek support . In contrast, adolescents report finding pornography useful for learning how to have sex and for general sexual exploration or pleasure . Intentional seeking is increasingly likely throughout these years , particularly in adolescent males . Australian data suggest that young people are viewing pornography as early as 3.2 years prior to their first sexual experience with another person . Literature exploring the link between pornography viewing and adolescent sexual behaviour yields varied results . SEM can, however, play a role in developing attitudes towards sex and relationships, with evidence supporting positive influences and experiences ; as well as correlations with attitudes consistent with sexual violence and harmful sexual experiences . Pornography literacy, underpinned by media literacy theory, is an approach that aims to develop critical appraisal skills and educate about the potential negative impacts of viewing pornography, as a harm reduction strategy . Such programs include information that challenges myths about how realistic pornography is, how it is produced and distributed, and associated legal considerations . Young people support the inclusion of information that builds critical analysis skills towards body image representations, sexual violence, and representations of diverse communities . Existing SEM and pornography literacy programs have been found to be feasible and, importantly, trustworthy by young people . Australian young people also report wanting pornography‐specific education that respects their desire to watch if they choose to . In Australia, RSE content is set within the national Health and Physical Education curriculum, which can be adapted by individual states or territories, with schools making further adjustments to local needs during implementation. 
There is little curricular guidance provided to schools and educators around how or if to teach SEM literacy, with pornography only mentioned as an optional elaboration amongst media literacy content . Therefore, while a comprehensive RSE curriculum allows for the inclusion of SEM and pornography literacy, delivery in Australia can be inconsistent across jurisdictions . Parents are recognised as essential partners in the delivery of RSE as part of a whole‐of‐school approach including the school curriculum, parental, family, and community engagement and school policy . Along with school‐based programs, an Australian domestic violence prevention organisation, Our Watch, recommends the provision of information and resources for parents to aid them in conversations with young people to support critical analysis skills for pornography . Research with Australian and New Zealand parents has found them to have varied confidence and comfort when discussing pornography with their children, with conversations occurring infrequently . Parents may also underestimate how much young people are viewing pornography . While literature closely examining parental perspectives towards SEM literacy education is still growing, studies indicate that parents prefer discussions and open dialogue with their children about the material they consume rather than a sole focus on restricting their access . Australian parents do not believe they have sufficient knowledge of school‐based RSE and believe schools could have a greater role in providing pornography education within the current curriculum . New Zealand parents report wanting information from experts about how to support this education at home . This paper outlines a thematic analysis of semi‐structured interviews conducted with sexual health experts, exploring their experiences engaging with parents on the topic of SEM. Interviews explored experts' perceptions of parental comfort with the topic, experiences of barriers and enablers and insights to improve sexual health education for parents supporting their children to navigate SEM's influence. Interviews formed part of a broader scoping review assessing parental perspectives towards SEM literacy education and available online resources, enabling deeper exploration of the topic . This study's scope included pornography, as well as highly sexualised imagery in various media, acknowledging all media's potential influence over consumers' sexuality attitudes and current literature adopting similar approaches . The scoping review and these purposively selected interviews constituted an effective method to collate existing resources and the experiences of providers of education to parents. Method A scoping review following Arksey and O'Malley's original framework was conducted, with an additional sixth step incorporating expert insights . The scoping review encompassing the whole framework, including data from the expert consultation interviews in tabulated format, has been published separately . This paper outlines a thematic analysis of the interviews conducted in the framework's final stage. 2.1 Recruitment Stakeholders identified throughout the scoping review, as providers of SEM education or resources for parents, were emailed inviting them to participate in an interview. Those wanting to participate were emailed an information sheet and provided consent via an online form. A second email was sent within four weeks to those who did not respond. 
The Curtin University Human Research Ethics Committee approved the study (HRE2022‐0191). 2.2 Interviews Semi‐structured interviews were conducted online, with an interview schedule and lasted approximately 45 min. Interviews took place between January and April 2023. Discussion allowed for more in‐depth detail about the resources/programmes than what was available online, including how the resources/programmes were developed and/or evaluated. Importantly, the interviews explored experts' experiences engaging with parents, including challenges they had experienced and effective strategies in overcoming these to support parents as key educators for their children. 2.3 Data Analysis Interviews were recorded and transcribed using Otter, an online transcription tool. Transcribed data was entered into the NVivo software to undergo a thematic analysis guided by Braun and Clarke's six phases . Data familiarisation and inductive coding was conducted by the lead author. Semantic coding was used to analyse statements directly as communicated by participants, and latent coding allowed for a deeper analysis of statements as interpreted by the authors . Where appropriate, some statements were coded both semantically and latently. The lead author maintained a field journal documenting participant reactions and tone, and personal reflections, which aided latent coding . Themes and sub‐themes were generated through discussion and confirmation with the second and third authors to enhance the robustness of the analysis . A map was generated to further refine codes, to help define and name themes and sub‐themes, presented in Figure . Recruitment Stakeholders identified throughout the scoping review, as providers of SEM education or resources for parents, were emailed inviting them to participate in an interview. Those wanting to participate were emailed an information sheet and provided consent via an online form. A second email was sent within four weeks to those who did not respond. The Curtin University Human Research Ethics Committee approved the study (HRE2022‐0191). Interviews Semi‐structured interviews were conducted online, with an interview schedule and lasted approximately 45 min. Interviews took place between January and April 2023. Discussion allowed for more in‐depth detail about the resources/programmes than what was available online, including how the resources/programmes were developed and/or evaluated. Importantly, the interviews explored experts' experiences engaging with parents, including challenges they had experienced and effective strategies in overcoming these to support parents as key educators for their children. Data Analysis Interviews were recorded and transcribed using Otter, an online transcription tool. Transcribed data was entered into the NVivo software to undergo a thematic analysis guided by Braun and Clarke's six phases . Data familiarisation and inductive coding was conducted by the lead author. Semantic coding was used to analyse statements directly as communicated by participants, and latent coding allowed for a deeper analysis of statements as interpreted by the authors . Where appropriate, some statements were coded both semantically and latently. The lead author maintained a field journal documenting participant reactions and tone, and personal reflections, which aided latent coding . Themes and sub‐themes were generated through discussion and confirmation with the second and third authors to enhance the robustness of the analysis . 
A map was generated to further refine codes, to help define and name themes and sub‐themes, presented in Figure . Results Nine responses were received from the 23 organisations approached. One organisation declined, and another opted to provide written responses to the interview questions. Written responses have not been included in this analysis. Seven interviews were conducted. Five participants were from Australia, and two from New Zealand. Most participants regularly gave presentations to parent groups, organised through community organisations or schools, with accompanying online resources to support parent learning. These presentations generally lasted an hour, were provided in person, with sporadic online delivery, and predominantly in metropolitan areas, though some experts provided insights from occasional regional engagements. Themes and sub‐themes from two overarching themes, content delivery and content development, have been described. 3.1 Content Delivery 3.1.1 Parent Perceptions of Content All participants stated that parental feedback on their education sessions was overwhelmingly positive. Two participants who deliver presentations to parents via school communities recalled school staff relaying positive feedback they received from parents. Most stated that parents were grateful to receive information about something they knew was important but did not feel equipped to address with their children or did not know where to go for support. Most participants reported gratitude from parents for practical information that supports them in discussing SEM with their children, such as conversation starters and take‐home resources. One participant stated this was specifically relevant for parents who were aware their children were watching pornography. This is supported by another participant's reflection that parents feel less scared once engaging with their content around young people and SEM. ‘It's giving them that confidence. They're happy with the knowledge, and it's dispelling a lot of their fears… that is the biggest feedback that I've got, is that they're not so terrified because most of the stuff out there is terrifying.’ (Participant One) Participants reported receiving little to no negative feedback from parents about their content. In fact, when asked about any criticisms received, one participant recalled receiving constructive criticism that their audience wanted more content on SEM and felt the school should be providing more SEM literacy content to students. Another participant acknowledged their parent education sessions are voluntary and therefore likely to attract a receptive audience, though this participant also notes that in their experience, pornography education may be less divisive than commonly thought, stating: ‘it cuts across left and right, conservative and liberal. …there'll be liberals who can't stand what I've got to say about it, they want individual choice… And then on the conservative side, there are absolutely conservatives who very much want us to be talking about this, because it's out of sync with values that they hold dearly. And then there are conservatives who think… ‘How dare you talk about that?’. So it's… a weird issue in that it cuts across all of those things.’ (Participant Five) Most participants reflected on being cognisant of the differing values and experiences of parents they engaged with when delivering content. One stated their audience was usually an ’even mix’ of parents who did and did not watch pornography. 
Another acknowledged similar challenges with parents who may ‘have their own trauma, or their own investment in porn use, or whatever it is, to bring them through feeling okay.’ (Participant Five) Also discussed were experts' experiences in being frequently asked about ethical porn as a potential means of conveying better messages to young people than mainstream pornography. ‘I get a lot of questions about ethical porn… they're like ‘hey, mainstream porn that you get at a click of a button is sending some potentially pretty problematic messages, so where do we go as an alternative?’ (Participant Seven) 3.1.2 Gender Differences A few participants reflected on the gender gap in their audiences, stating that their presentations were generally attended by mothers and mother figures, such as grandmothers or aunts. However, one participant did report fathers increasing in attendance in recent years. They attributed this to changing expectations of mothers and fathers. ‘There are some people that come in with that view that females are better… at talking to their girls and fathers are better off talking to their boys. But I'd say in more recent years, I don't see that as much as I did.’ (Participant Seven) This participant reflected on the intersection between gendered parent expectations and culture, specifically referencing their experiences providing online presentations to Middle Eastern countries. In these scenarios, they experienced challenges engaging fathers in communities ‘where the father figures do not talk about that’ as it was deemed either the mother's role, or not appropriate to discuss at all. 3.1.3 Lack of Awareness There was consensus amongst all participants that their parent audiences were generally unaware of the extent of young people's SEM access, how easy SEM is to access, and the type of material available. For example, participant five stated: ‘they often just have no idea, for example, that 46% of videos includes incest themes, or… that one in eight porn video titles describe behaviour that constitutes sexual violence and that level of gender aggression… and problematic messages, is quite often news to a lot of parents, particularly female parents.’ (Participant Five) The potential impacts of viewing pornography were frequently reported as new information for many parents. Some participants felt not all parents were ready to engage with their content because they believed their children were not watching pornography. Most participants reported parents' lack of sexuality education contributed to feelings of overwhelm, for example: ‘I actually have to start very basically. Parents have never had any sex or sexuality education in their life ever, hardly’ (Participant Two) 3.1.4 Maintaining Innocence Participants frequently cited parents not wanting to introduce the topic of SEM to their children too soon as a key reason for some parents not engaging with their content. This was particularly the case with parents of younger children. A specific concern was that proactive discussions with their children about SEM would make them seek pornography. ‘Another factor that definitely makes them pause is a fear of introducing something too soon. Yeah. 
I would say that… for parents of primary school aged children.’ (Participant Seven) ‘the biggest barrier for talking about porn is the fear that they're going to harm their child, or their child is going to start watching it because of the conversation.’ (Participant One) Different responses from parents based on the ages of their children were commonly discussed by participants as impacting education session attendance and/or how willing school communities were to promote the sessions due to anticipated parental backlash. ‘it's (the education session) not getting as much traction as we thought. And I think the reason is, because we've used the word pornography, and we're aiming it for parents of primary aged children.’ (Participant Four) ‘I think that the fear… is a significant factor for schools engaging with this issue, they're worried about parental response.’ (Participant Five) 3.2 Engaging Parents—Barriers and Enablers 3.2.1 Difficulty Accessing Certain Parent Groups Most participants reported challenges accessing particular parent groups, such as those in rural and remote areas, culturally and linguistically diverse groups, and those of Aboriginal and/or Torres Strait Islander descent. All discussed recognising their audience's diversity, particularly when values and attitudes were underpinned by cultural or religious beliefs, and how this impacted whether parents chose to engage. Participant four expressed the following view about one cultural group: ‘And they decided no, that they didn't want their children involved. And the teacher said ‘oh my gosh, you know, it's such a shame, because… these children are not getting any discussion at home. They're curious, and they are the ones we're hearing at school who are accessing pornography.’ (Participant Four) 3.2.2 Confusion About Where to Access Support Two participants noted some parents being unaware of resources and services available, and the confusing landscape of online support. As a result, they felt parents relied on schools to provide this education, even though there was a lack of curricular support. Participant seven stated: ‘there are fantastic services, online resources online for young people and for families, but you don't know what you don't know. And what I found is lots of parents don't actually know where to go online, to get help and support’ (Participant Seven) ‘Although there's a huge amount of resources available today… there is still a lot of parents that are really feeling ill equipped to translate that into conversation and education with their children. A lot saying, ‘well, they're gonna get something at school, so I'll leave it to school.’ (Participant Seven) Another participant who also provides teacher training supported the incomplete nature of the Australian school curriculum, stating ‘it's completely inadequate, human sexuality is neglected in the Australian curriculum’ (Participant Two) In their opinion, this led to schools thinking they were providing comprehensive content, which the participant found to be incomplete due to a lack of guidance from the curriculum and regulation around what is delivered. 3.2.3 Internal Resourcing Two participants spoke about their resourcing challenges when engaging parents, citing insufficient time and funding to develop their resources and properly engage with parents. Most participants were from either small not‐for‐profit organisations or sole‐trader businesses, providing a mix of free and paid content. 
‘So it's just me, so I don't have the time, resources to do as much as I'd like to, particularly in resource creation and development.’ (Participant Seven) 3.3 Effective Strategies for Engaging Parents 3.3.1 Variety of Formats A few participants spoke about delivering more recent workshops in an online format, which they felt parents found beneficial, particularly single parents who were time poor. Similarly, participants noted partnered parents liked to participate online together so they could ask each other questions during the session. Two participants offered a peer‐support platform to parents, which they found highly beneficial for peer interactions, though one did note it essential to provide parents the opportunity to ask questions confidentially. ‘There's real privacy around this stuff. And they're not often happy to talk in a way that they're identified.’ (Participant One) Most participants also discussed providing accompanying materials to their parent audiences to reinforce workshop content and support them to apply learnings. 3.3.2 Maintaining a Neutral Position on Pornography Due to the diversity in parent attitudes and experiences, most participants discussed the need to avoid a moral position on pornography and remain focused on delivering evidence‐based content. Participants reported experiences with parents rejecting content that presented a view contradicting their own values and experiences. This was discussed by participants who had experiences with parents with both conservative and progressive views of sexuality. In both scenarios, stigma and shame‐free messaging was deemed effective, for example: ‘I've had dads say they've been to other workshops around sexualized media where straight off… the bat, it's anti‐pornography and they have shut down. They said ‘I stopped listening because it was such an affront. I didn't like the way it was delivered on’ and I think we've got to be mindful of that as well.’ (Participant Seven) ‘There's no judging, I don't shame people for believing sex should only happen in marriage because (of) their values.’ (Participant One) One participant stated an anti‐pornography position was counter‐intuitive to what their content was trying to achieve in helping parents discuss and educate their children. They noted, ‘even if that's my view, someone else's view, that is not going to stop our kids seeing it, that's not helping us to have conversations.’ (Participant Seven) They continued to emphasise this point, elaborating the original purpose of their content being to educate. ‘I'm not presenting from one space, not anti‐pornography, not pro pornography. Just how can we have conversations, to equip our children with the skills and strategies they might need to navigate this.’ (Participant Seven) Another participant discussed encouraging parents to bring their own values to their content to help inform how they will approach the discussion with their children, with a focus on preparing and having the conversation. 
‘looking at porn through an ethical lens… how does it sit with my views on gender and pleasure and consent and racism?… And I think that works really well with parents because they can draw it into their own family.’ (Participant Three) ‘how to… prepare for conversations, and some of the things to think about, how parents can work with their own… preconceived ideas (and) biases’ (Participant Three) 3.3.3 Creating a Safe Environment Participant two noted the importance of keeping people safe in an environment where they are learning about a topic which is both value‐laden and involving their children. They reflected on a similar skill set being required while providing education about the Human Immunodeficiency Virus (HIV) over the last two decades and highlighted the critical importance of SEM literacy sessions being delivered by well‐ trained experts. ‘Keeping people safe in a session like this is really important. And I've developed that skill over 18 years of educating, since back in the days of HIV when it first came out. So I think people feel really safe and cared for in that session.’ (Participant Two) 3.3.4 Focusing on General Media Literacy for Parents of Younger Children Most participants discussed challenges in engaging parents of younger children in discussions about pornography. They reported that focusing on general media literacy for younger, primary school‐aged children was beneficial in these scenarios, with a view to introducing sexual content at a later age deemed more appropriate. One participant stated: ‘media literacy is a really big foundational skill for me, because I think it's the biggest missing 21st century skill, we're just not doing it… And we can't just arrive at porn and start critiquing it, if we haven't taught people how to critique media they see in on social media and signs in advertising, in movies, whatever it is.’ (Participant Seven) Another supported this approach, and reflected on media literacy being included in school curricula where students are taught to critique television advertisements. This was seen as an essential skill before pornography literacy is introduced. ‘If we can build on these conversations… so that if they're watching a movie and a ghost comes out of a wall, they know that that's special effects… by the time they get to porn… they may actually already be cynical about what they're seeing.’ (Participant One) 3.4 Content Development 3.4.1 Parental Engagement Most participants discussed the critical importance of engaging parents in content development, which they felt did not happen enough, citing their perceptions that most current materials were youth‐targeted. Most felt a sole focus on educating young people about SEM, without adequately educating parents, is insufficient. Even with effective education, young people still need a parent guiding them. Another who delivers presentations to both parents and school students stated: ‘it's just inadequate to go into school, deliver content to children and walk out of the school without teaching the adults… if I had to spend money, it would be on the adults first… and that includes parents and teachers.’ (Participant Two) One participant cautioned being dismissive of parent fears in relation to school‐based education, highlighting the need to recognise parents' interest and concern, stating: ‘I don't dismiss parents' right to be concerned and have input into the frameworks with which their kids are being taught about this topic. 
Sometimes schools I think are like, ‘Oh, God, yeah annoying parents’, when it is actually appropriate for parents to care about what their children are learning on these issues’ (Participant Five) 3.5 Youth Engagement The majority of participants discussed the importance of engaging young people to inform parent resources, as well as their own practice as educators, to ensure their content remains current. One participant stated that young people frequently raised the issue of ethical pornography with them and already possessed the ability to critique and determine what was reality versus. fantasy, yet still wanted to attempt some of the activities viewed. This participant recounted an experience with a young gay male who was uncomfortable with the power dynamics in mainstream pornography; however, a paywall prevented his access to material he deemed more ethical. Participants’ responses to questions about ethical porn were to encourage a critical approach, allowing young people to assess the material they view through an ethical lens, as described above, to determine alignment to their own values around pleasure, consent, and racism, as examples. Another participant reported young males frequently raising the mental health impacts of viewing pornography with them, often outside of official education sessions. They reported young males feeling ashamed of their viewing habits, which they thought were unhealthy and which the participant felt was contributing to negative self‐esteem. 3.6 Harm Reduction Approach All participants discussed the importance of adopting a positive, solutions‐focused approach, to be encouraging of parents and provide them with practical advice, rather than focusing solely on pornography's potential harms. One participant acknowledged the content could be distressing; therefore, reinforcing the need to focus on education. Most participants discussed concerns about inadvertently entrenching parental fears, rather than addressing their concerns and providing evidence‐based education. ‘My aim is to try to educate people… the reality is shocking enough. You don't need to… make things seem bigger than they are.’ (Participant Five) ‘We see too many people taking the scare approach, rather than actually addressing parents' fears.’ (Participant One) A calmly executed solutions‐focussed approach was deemed an important strategy for parents to adopt, as parents' fear‐based approaches can engender strong reactions with their children, for example: ‘if parents aren't calm, if we get all stressed and upset and angry… that escalates things for our child or our teen even. So it's about trying to not scare parents, I really don't want to scare them and make them overwhelmed.’ (Participant Four) All participants agreed that a harm reduction approach seeking to provide information was more effective than advocating for abstinence from pornography. One stated: ‘I'm really acknowledging that completely limiting entirely children's exposure to… sexualized media and pornography. And whilst we might all want to achieve that, perhaps if that's your opinion, it's probably unlikely to be successful.’ (Participant Seven) This participant continued to reinforce the need for educators to focus on tangible actions, stating: ‘What can we do about it? Because being anti‐pornography is not going to make it go away. 
In fact, it might just shut down lines of communication in your house.’ (Participant Seven) Most participants spoke of being encouraging of parents in their innate roles and responsibilities to support their children's learning. ‘And I reassure them that… when kids have got a parent to support them… then porn is not going to screw them up. It's when they don't have a parent to talk to, that's when porn can be problematic for them.’ (Participant One) Most participants said they relied on humour to ‘lighten the mood’ at times where information could be distressing, or at the end of their sessions. This was seen as an effective way to maintain engagement and reduce potential anxiety. One participant noted that their main key message to parents was that ‘you can do this better than the porn industry will do this.’ (Participant Five) 3.7 Evaluation The majority of participants conducted some degree of evaluation of their resources or programs with parents, predominantly online surveys administered after education sessions. Only one participant stated that they did not gather any evaluation data. Most acknowledged difficulties in getting parents to engage in surveys, citing low response rates. One participant stated ‘if you… sent out 100 surveys, you probably get 10 back.’ (Participant Four) Another noted a desire to track resource downloads and website visits, though they cited skill gaps and resource limitations within their organisation as barriers. Two participants discussed specific data collected in pre‐and post‐surveys. Pre‐surveys were generally designed to gain insight into parent motivations for attending and to gauge their current knowledge on SEM, with post‐surveys asking about the key takeaway from the presentation.
Discussion Findings from this research confirm existing literature that parents support comprehensive RSE and SEM literacy education for their children . Parents' understanding of SEM literacy and specifically the contemporary context of young people's pornography use, however, may require development. Findings highlighted that parents were thankful for the information received, particularly introductory information containing pornography viewing statistics and potential impacts. Evidence supports a whole‐school approach to RSE as more effective than one‐off presentations or workshops. However, the lack of explicit guidance for SEM literacy education in the Australian curriculum may result in schools outsourcing this content to organisations such as those involved in this research, who provided education sessions privately and within schools. As such, participants in this study emphasised the importance of education being provided by well‐trained experts with effective quality assurance processes, who are adequately resourced to collect and implement evaluation data. It is also essential that providers are aware of their own biases and viewpoints, presenting a morally neutral stance towards SEM and pornography, to create safe, respectful environments for parents to learn. Some participants expressed concerns about relying heavily on fear‐based approaches as this is at odds with best practice and may alienate parents with contradictory views and experiences. It also risks perpetuating parent concerns about disrupting their children's perceived innocence, despite evidence showing the effectiveness of comprehensive RSE education in improving sexual health outcomes .
These findings suggest merit in trauma‐informed approaches given the potentially sensitive topics that SEM literacy discussions can involve, such as sexual violence. Options for parents to engage further and ask questions in a confidential manner may also be effective, alongside online delivery options providing anonymity. The fear‐based approach was deemed ineffective for educative conversations with young people and contradictory to evidence that both parents and young people prefer approaches that support the ability to critique and analyse SEM . As such, participants in this study preferred to use humour and encourage calm responses for parents to engender productive conversations with their children. This allows parents to apply individualised approaches that align with the values they wish to share with their children, encouraging critique of messages presented in SEM through juxtaposition against these values. Peer support options may also be beneficial in these scenarios, as parents may feel more comfortable discussing approaches with others with whom they have shared values and experiences. Experts in this study discussed parents and young people inquiring about ethical pornography as potential alternatives providing messages more in line with healthy relationships , suggesting merit in research exploring parent perspectives towards young people's engagement with ethical pornography. These findings further support the need for effective youth engagement, even for parent‐targeted resources and programs. Recognising the value of a parent's role in educating their children was reported as essential, to counter feelings of powerlessness. Parents should be encouraged to be active participants in providing home‐based education and involved in school‐based education and policy, which should ideally align . This need for parents to be across their child's learning was supported both in this study and another Australian study which found parents (both comfortable and less‐comfortable with SEM literacy) would likely support school‐based education if they were sufficiently supported to continue this education at home . This supports the need for parental education on SEM literacy to mirror that which is provided in schools. This may help ensure alignment of content and messaging, reduce parental confusion about where to go for support, and reduce school apprehension towards providing SEM education. Schools, therefore, need to effectively communicate their curriculum content to parents. Innovative solutions are required to provide SEM literacy information to parents who do not engage due to their strong anti‐pornography views. Providing this content within a broader offering of sexual health topics and ensuring SEM literacy content includes the potential harms of SEM viewing may be effective. Furthermore, print and digital information sent to all parents, so as not to appear to be targeting specific parents, may be effective. This further supports the need for supportive school policy to embed SEM literacy in a whole school approach which includes information for parents, along with other strategies including access to optional education sessions such as those provided by participants in this research. This approach may also help to engage parents from hard‐to‐reach communities. 
While the focus of this research was on secondary schools, some participants did provide additional education sessions to parents of primary school‐aged children, which some parents of younger children found confronting. In these cases, there may be benefit in general media literacy education to help build children's critical awareness skills as essential scaffolding for future SEM literacy skill development, if and when deemed appropriate by the parents. Strengths and Limitations All experts were from not‐for‐profit organisations or were private providers who do not publish their results in peer‐reviewed journals. Therefore, this research is novel as it provides those working closely with parents the opportunity to contribute to the literature. A focus on parents as providers of SEM literacy education is also novel, as much of the existing literature pertains to youth or school systems. The small number of participants limits this study's generalisability. Searching for organisations on Google means that some organisations providing SEM literacy within a broader programme may have been missed if this content was not described on their website. All participants acknowledged that parents opt in for their presentations or resources, meaning their predominantly positive receptions may not mean all parents are comfortable with SEM literacy education. Participants discussed their own difficulties engaging parents less comfortable with their content, those in rural and remote areas, and those from culturally and linguistically diverse communities, meaning these findings cannot be seen as representative of those groups. Conclusion This article provides an analysis of how sexual health experts experience parental engagement on the topic of SEM literacy education. Results can assist health promotion efforts targeting parents to help them build their children's capacity to critique the SEM they are likely to view. Sexual health educators and promoters are encouraged to adopt a harm reduction approach focussing on parent's ability to help their children transition into adulthood with the necessary skills to critique SEM and form attitudes reflective of respectful, safe and pleasurable relationships. Together with the broader scoping review, this research adds to the limited published information about current parent‐targeted approaches and resources, in an environment where SEM is ubiquitous. Further research into specific parent needs from supporting resources and programs is warranted to ensure parents are adequately supported to perform their function as educators alongside schools, the broader community, and media. The study was approved by the Curtin University Human Research Ethics Committee (HRE2022‐0191). The authors declare no conflicts of interest. |
Enhancing pre-school teachers’ competence in managing pediatric injuries in Pemba Island, Zanzibar | 3bc9052a-68fd-4881-a6fd-1857fb83fae4 | 9716773 | Pediatrics[mh] | A safe and healthy learning environment in pre-schools has received increased attention in promoting the well-being of pre-school children . However, pediatric injuries have remained one of the leading causes of childhood morbidity and mortality around the globe. School children are a major risk group for traumatic conditions such as fractures, fainting, falls, drowning, and road traffic accidents, which threaten their health during school life . Recent studies indicate that 10 to 25% of pediatric injuries worldwide occur while children are at school . More than six million children and young adults are seriously injured and need emergency hospital services . Available statistics show that 300 children and teenagers die in Africa, most of them from unintentional injuries, drowning, poisoning, falls, burns, and violence . Research shows that unintentional injuries among children frequently happen on school premises, especially on sports grounds . In Tanzania, for example, an epidemiological study conducted in 2012 in rural and urban areas of the mainland reported that 2.5% and 4.3% of people in urban and rural areas respectively had been injured during the previous year, and 37% of those injured were children below 14 years . Children below 12 years have a greater vulnerability to injuries due to frequent falls . A cross-sectional survey conducted in Tanzania revealed that 47% of under-five children are at great risk of experiencing pediatric injuries . The incidence of pediatric injuries in some districts of Pemba Island in Zanzibar increased markedly, from a reported 4371 cases in 2018 to 4998 cases in 2019. Data indicate that injury cases are relatively higher in WETE and MICHEWENI districts and relatively lower in CHAKECHAKE and MKOANI districts. In some developing regions such as Zanzibar, Tanzania, there is a lack of health workers (nurses and/or doctors) in schools, and the distance from schools to health centers is too long for timely access to emergency health services once a child is injured at school . Despite these challenges, the Ministry of Health in Zanzibar, through objective four of its five-year development plan (FYDP), aims to improve partnerships among the public sector, private sector, religious institutions, civil society organizations, and the community in the provision of health services. Experience indicates that these interventions focus only on health centers and their health personnel and hardly involve other personnel, such as teachers, who are closest to pupils in schools . In this regard, schools are ideal locations to consider when focusing on the prevention of injuries associated with children's conflicts, play activities, the presence of swimming pools in some schools, and/or nutrition-related health emergencies such as hypoglycemia . However, scholarly works demonstrate that teachers in pre-schools have not yet been exposed to any formal training to provide first aid services to injured children.
Yet, no single course in any pre-service program offers pre-school teachers opportunities to learn how to provide first aid services to pre-school children. In this situation, teachers feel that it is not their concern and receive little support when confronting pediatric injuries, and thus may perceive first aid as the role of healthcare personnel . It therefore seems important to provide pre-school teachers with an opportunity to enhance their knowledge, attitude, and intention to provide first aid services in daycare and boarding preschools. Attempts to enhance teachers’ knowledge, attitudes, and competencies in providing first aid services are not new. The American Academy of Pediatrics (AAP) introduced a pediatric first aid course and training for caregivers and teachers in 2005, focused on educating and empowering them to care for sick and/or injured children confidently and effectively . Elsewhere, findings from such first aid training have revealed a positive influence on preschool teachers' and caregivers’ competencies in managing pediatric emergencies in schools and homes respectively . However, little has been documented on sustainable multidisciplinary pedagogical strategies for addressing pediatric injuries among children in pre-schools, and no interventional program involving preschool teachers in learning the provision of pediatric first aid has yet been established in Zanzibar, Pemba Island in particular. In this paper, we report findings from first aid training among pre-school teachers that aimed at enhancing their knowledge, attitude, and intention to provide first aid to pre-school children in Pemba Island. Study design and approach This study aimed at enhancing preschool teachers’ competence in managing pediatric injuries among preschool children in Pemba Island, Zanzibar, from September to April 2020. The study adopted an uncontrolled quasi-experimental design that consisted of a pre-test that established the preschool teachers’ baseline knowledge, attitude, and intention to practice first aid management on school premises. It was implemented in teachers’ resource centers in Zanzibar, based on the institutional regulations and guidelines for postgraduate studies at the University of Dodoma, Tanzania. Quasi-experimental studies can be implemented quantitatively, qualitatively, or with a mixed research approach, in either a controlled style (two groups, one serving as the intervention/treatment group and the other as the control) or an uncontrolled style (a single group serving as both intervention and control) . The post-test served as an end-line assessment to measure the effect of the intervention on preschool teachers’ knowledge, attitude, and intention to practice first aid management for preschool children at schools. The post-test questionnaires had an equivalent number of items per variable to the pre-test questionnaires that were administered to the same participants during the baseline assessment. The consented participants filled out the pre- and post-intervention questionnaires in a separate unoccupied class to ensure confidentiality and privacy. Brief instructions on how to fill out the questionnaires were provided by the principal investigator and assistants before distributing them to study participants. The principal investigator and the assistant supervised the process and were available to respond to participants’ queries throughout the process.
A schedule for the training was then shared with the study participants; the training was scheduled to start one week after the day of the pre-test (baseline assessment). Using the same study site, the design was aligned with a quantitative research approach to quantify preschool teachers’ sociodemographic characteristics and the variables of interest under study, including the intervention (independent variable), first aid knowledge, attitude, and intention to practice the provision of pediatric first aid services to pre-school children. The design was chosen for the simplicity of quantifying the variables under study and for controlling information contamination among the consented preschool teachers. The intervention Training materials To assure the validity of the training, the first aid training materials used in this study were adapted from the American Academy of Pediatrics and implemented based on the prescribed guidelines and standards . Intervention implementation team Information in Table shows that 15 trained personnel, who also had expertise in teaching and/or providing health services, implemented the training process as the study intervention. They included nine males and six females, based on their consent to take the assigned roles of the intervention. The trained personnel were required to have at least a tertiary level of education and working experience of at least 1 year to be recruited into the study. Out of the 15 trained personnel, 4 (26.7%) were nurse educators, 3 (20.0%) were clinical instructors, 3 (20.0%) were medical doctors and 5 (33.3%) were teachers. All of the trained personnel were residing in Zanzibar. Each trained person was assigned to train one group of study participants throughout the intervention timeline. Timelines of the intervention and administration of post-tests among study participants The intervention was conducted in Zanzibar using unoccupied Teachers’ Resource Centers (TRCs) as venues for the first aid management training sessions. As shown in Table , the intervention consisted of four phases: phase one (September to January 2020) comprised the development of a proposal and first aid training materials, experts’ appraisal, and prototyping of the first aid training materials. Phase two (February 2020) involved the recruitment of the study participants and a baseline assessment to record pre-school teachers’ sociodemographic characteristics, prior knowledge, attitude, and intention to provide pediatric first aid services to pre-school children. Phase three (March to April 2020) was six weeks of first aid training among pre-school teachers, with three sessions per day. Phase four (April 2020) served as an end-line assessment in which study participants were administered post-tests to assess their first aid knowledge, attitude, and intention to practice first aid management for preschool children on preschool premises. The post-test (end-line assessment) was administered immediately after the intervention, defined in this study as one week after the first aid management training. Composition of the training package The training was implemented in groups of eight members each, making a total of 15 groups that were trained for 6 weeks, with an average of one topic per session, based on the negotiated schedule of the sampled schools.
As indicated in Table , brainstorming, discussions, group work in groups of 5 to 8 members, videos, and demonstrations were the main pedagogical strategies used to facilitate first-aid learning among preschool teachers. Sessions’ duration ranged from 30 to 90 min. Evening hours (from 03:00 pm to approximately 04:00 pm) were used to facilitate the training, not only as negotiated by the study participants but also as a mechanism for not interrupting schools' teaching and learning activities. Trained personnel (nurse educators, clinical instructors, and medical doctors) would by then have finished their morning duties at their working stations and rested before implementing the intervention. The training involved a theoretical part (concepts of first aid, including its characteristics, principles, advantages, indications/health problems that may need first aid, and the resources/first aid kit needed) and a practical part (facilitators’ demonstrations and return demonstrations by the study participants on when and how to provide first aid management to children). The first two weeks of the training were used for facilitating the theoretical part of the package, whereas the other four weeks were for practical activities. Demonstrations were repeated flexibly on participants’ request until saturation, which also served as a formative assessment of their mastery in providing first aid management to pre-school children. Participants and settings A stratified random sampling technique using a random numbers table was used to select 45 of 55 government-based schools and 30 of 37 private-based schools located in the north Pemba region of the Zanzibar islands, Tanzania. School ownership was set as the criterion to stratify schools into two strata: government-owned schools and privately owned schools. A statistician independent of this study performed the stratified sampling procedures. The consented preschool teachers working in the selected study settings were included in the study, with the exception of those who were sick, participating in other projects, or engaged in special school activities. Sample size determination A total of 217 preschool teachers were eligible to join the study. The following formula (1) was used to calculate the minimum sample size of the current study, as recommended by previous studies : n = {Zα√[π0(1 − π0)] + Zβ√[π1(1 − π1)]}² / (π1 − π0)², where Zα was set at 1.96 from the normal distribution table, Zβ was set at 0.80, and the proportions π0 (mean zero) and π1 (mean one) were adapted from the previous study. As shown in Fig. , 120 pre-school teachers were sampled to participate in the pediatric first aid training program conducted in two teachers’ resource centers (TRCs), and their information was analysed after the end-line assessment.
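For readers who want to check the arithmetic, the following is a minimal sketch of formula (1) above. It assumes Zα = 1.96 and Zβ = 0.80 as reported in the text; the proportions π0 and π1 are placeholder values only, since the paper adapts them from a previous study without restating them in this excerpt.

```python
from math import ceil, sqrt

def minimum_sample_size(pi0: float, pi1: float,
                        z_alpha: float = 1.96, z_beta: float = 0.80) -> int:
    """Formula (1): n = {Z_alpha*sqrt(pi0*(1-pi0)) + Z_beta*sqrt(pi1*(1-pi1))}^2 / (pi1 - pi0)^2."""
    numerator = (z_alpha * sqrt(pi0 * (1 - pi0)) + z_beta * sqrt(pi1 * (1 - pi1))) ** 2
    return ceil(numerator / (pi1 - pi0) ** 2)

# pi0 and pi1 below are hypothetical placeholders, not the values used by the authors.
if __name__ == "__main__":
    print(minimum_sample_size(pi0=0.50, pi1=0.70))  # -> 46 with these placeholder values
```

With the placeholder proportions shown, the function returns 46; the study's actual minimum sample size depends on the π0 and π1 taken from the earlier study.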
The study adopted questionnaires from previous studies , which were modified by the principal investigator, supported by the consulted research experts, colleagues, and statisticians, to fit the Tanzanian context.
Characterization of the research tool
The tool collected participants' socio-demographic data such as name, sex, residence, marital status, educational level, and years of teaching experience (10 items), together with items on knowledge (20 items), attitude (9 items), and intention to practice first aid management among pre-school children (5 items). For analysis purposes, item responses were structured as "Yes/No", with "Yes" responses scored as "1" and "No" responses scored as "0". The endpoints of the outcome variables of interest were dichotomized for ease of interpretation. Pediatric first aid knowledge was assessed through the knowledge items, while the intention to practice pediatric first aid management was assessed as self-reported readiness and willingness to provide pediatric first aid services when confronted with pediatric injuries among preschool children on school premises. Post-test scores in preschool teachers' pediatric first aid knowledge (≥ 10 pass points), attitude (≥ 5 pass points), and intention (≥ 3 pass points on any intended action implying readiness and willingness to provide pediatric first aid services to preschool children), controlled for other factors at a 95% confidence interval and a probability of α ≤ 5% (0.05) and compared against the baseline findings, were considered a significant gain attributable to the intervention.
Validity and reliability of the study
The research tools were then shared with research experts, expert colleagues, and statisticians for proofing before being subjected to a pilot study conducted among 20 pre-school teachers (10% of the study sample) before data collection. Exploratory factor analysis was performed for item reduction to retain the highly weighted items above the statistically suggested threshold (> 0.3), as recommended by previous studies . The correlation coefficient was set at a cut-off point of ≥ 0.30, whereas a Kaiser-Meyer-Olkin (KMO) value of ≥ 0.5 and a probability of α < 0.05 were used to assess sampling adequacy, with the cut-off point set at ≥ 0.60. Findings of the exploratory analysis of the questionnaires indicated that 34 items (knowledge: n = 20 items; attitude: n = 9 items; and intention to practice first aid management: n = 5 items) weighed above the cut-off point of ≥ 0.30 and were therefore retained for further analysis. A scale analysis was then performed, and the findings revealed a Cronbach's alpha of 0.711, indicating psychometric properties approximately similar to those of the original questionnaires (α = 0.799). The reliability test therefore implied that the tool was reliable for use in the actual field data collection to measure first aid knowledge, attitude, and intention to practice among pre-school teachers .
Data analysis
The Statistical Package for the Social Sciences (SPSS) version 25 was used to analyze the data and to establish quantifiable data about preschool teachers' knowledge, attitude, and intention to practice first aid management among preschool children, in line with the study objectives. SPSS was also used to establish the effect of the intervention on the outcome variables, controlled for other factors such as the pre-school teachers' socio-demographic characteristics.
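As a sketch of how the internal-consistency check described above could be reproduced outside SPSS, the following minimal Python function computes Cronbach's alpha for a matrix of Yes/No item scores; the simulated responses are purely hypothetical and are not the pilot data.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of 0/1 item scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                               # number of items
    item_variances = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot-sized example: 20 respondents answering 9 Yes/No attitude items.
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(20, 9))
print(round(cronbach_alpha(responses), 3))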
Descriptive analysis was performed to establish the participants' socio-demographic characteristics. Mean score differences in pediatric first aid knowledge, attitude, and intention to practice pediatric first aid management between the pre-test and post-test were determined using a paired t-test. As recommended by previous scholars, the effect size of the intervention on the outcome variables of interest was calculated as Cohen's d for the pre-post t-test, that is, by dividing the mean difference by the pooled standard deviation . The following formulas were used to calculate the effect size of the intervention in this study:

(2) \( d = \frac{M_2 - M_1}{SD_{pooled}} \)

(3) \( SD_{pooled} = \sqrt{\frac{SD_1^2 + SD_2^2}{2}} \)

where \(M_1\) and \(M_2\) are the pre-test and post-test means, \(SD_1\) and \(SD_2\) are the corresponding standard deviations, and \(SD_{pooled}\) is the pooled standard deviation. Univariate and multivariable logistic regression was performed to demonstrate the association between the first aid training, controlled for participants' sociodemographic characteristics, and the outcome variables (first aid knowledge, attitude, and intention to practice the provision of first aid services to preschool children). The confidence interval (CI) was set at 95%, with statistical significance set at α = 5% (p < 0.05) and the study power (β) set at 0.80, to demonstrate the effect of the intervention on the outcomes of interest controlled for other co-related factors. An effect size of ≥ 1 was considered equivalent to a gain of ≥ 10% in preschool teachers' pediatric first aid knowledge, attitude, and intention to provide pediatric first aid services to preschool children on school premises.
Ethical consideration
The University of Dodoma (UDOM) Institutional Research Review Committee (IRRC) reviewed, approved, and issued an ethical permit for the study through ethical clearance number UDOM/DRP/134/VOL VII/. The government of Zanzibar granted research ethical permit number OMPR/M.95/C.6/2/VOL. 6/13, and the Zanzibar Health Research Institute (ZAHRI), through the Zanzibar Health Research Ethical Committee (ZAHREC), granted research ethical permit number ZAHREC/03/ST/JUNE/2020/109 to reach and conduct the study in schools. Clearance to reach the preschools was approved by the headteachers of the respective schools. Written informed consent was obtained from each participant by the principal investigator as one of the criteria for joining the study.
Anonymity procedures were adhered to in order to ensure the confidentiality of participants' particulars. Data were handled and secured by the principal investigator in a keyed folder. This study aimed at enhancing preschool teachers' competence in managing pediatric injuries in preschool children in Pemba Island, Zanzibar, from September to April 2020. The study adopted an uncontrolled quasi-experimental design consisting of a pre-test that established the preschool teachers' baseline knowledge, attitude, and intention to practice first aid management on school premises. It was implemented in teachers' resource centers in Zanzibar based on the institutional regulations and guidelines for postgraduate studies at the University of Dodoma, Tanzania. Quasi-experimental studies can be implemented with quantitative, qualitative, or mixed research approaches, in either a controlled style (two groups, of which one serves as the intervention/treatment group and the other as the control group) or an uncontrolled style (only one group, which serves as both the intervention and its own control) . The post-test served as an end-line assessment to measure the effect of the intervention on preschool teachers' knowledge, attitude, and intention to practice first aid management for preschool children at schools. The post-test questionnaires had an equivalent number of items per variable to the pre-test questionnaires administered to the same participants during the baseline assessment. The consented participants filled out the pre- and post-intervention questionnaires in a separate unoccupied classroom to ensure confidentiality and privacy. Brief instructions on how to fill out the questionnaires were provided by the principal investigator and assistants before distributing them to the study participants. The principal investigator and the assistants supervised the process and were available to respond to participants' queries throughout.
The study recruited 120 pre-school teachers, all of whom completed all the study cycles (100% response rate). As shown in Table , the mean age of the study participants was 32 ± 6.2 years, 84.2% of the sample were female, and the majority (78%) were working in the public sector.
Mean score differences in participants' knowledge, attitude, and intention to practice first-aid management for children between pre-test and post-test
Findings in Table illustrate the results of the paired t-test analysis, which indicated a statistically significant increase (p < 0.01) in participants' first aid knowledge scores after the training, with means of M = 7.47 ± 2.70 (pre-test) and M = 15.08 ± 5.34 (post-test), t = 22.860, df = 119. With regard to Cohen's d classifications of effect sizes , the effect of the first aid management training on pre-school teachers' knowledge was high (Cohen's d = 1.80). Similarly, post-test findings demonstrated a significant gain (p < 0.01) in participants' attitude scores after the training, with means of M = 11.45 ± 3.067 (pre-test) and M = 26.99 ± 6.587 (post-test), t = 27.372, df = 119. Based on Cohen's d classifications of effect sizes, the effect of the training on pre-school teachers' attitudes was high (Cohen's d = 3.02). Moreover, there was a significant increase (p < 0.01) in participants' scores for intention to practice first aid management for preschool children after the training, with means of M = 1.92 ± 1.553 (pre-test) and M = 4.76 ± 0.648 (post-test), t = 8.808, df = 119. The effect of the training on intention to practice was also high (Cohen's d = 2.39) based on the classifications of effect sizes.
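As a quick arithmetic cross-check, applying formulas (2) and (3) to the reported means and standard deviations reproduces the stated effect sizes of approximately 1.80, 3.02, and 2.39; the snippet below is only a verification of that calculation, not part of the original analysis.

from math import sqrt

def cohens_d(m_pre, sd_pre, m_post, sd_post):
    """Cohen's d using the pooled SD of the pre- and post-test scores (formulas 2 and 3)."""
    sd_pooled = sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (m_post - m_pre) / sd_pooled

# Reported pre/post means and SDs for knowledge, attitude, and intention to practice.
outcomes = {
    "knowledge": (7.47, 2.70, 15.08, 5.34),
    "attitude": (11.45, 3.067, 26.99, 6.587),
    "intention": (1.92, 1.553, 4.76, 0.648),
}
for name, values in outcomes.items():
    print(name, round(cohens_d(*values), 2))   # ~1.80, ~3.02, ~2.39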
The association between participants' sociodemographic characteristics and first aid knowledge, attitude, and intention to practice the provision of first aid services to children
To establish the association between participants' sociodemographic characteristics and first aid knowledge, attitude, and intention to practice the provision of first aid services to children, regression analysis was performed. As shown in Tables , , and , the adjusted odds of the first aid training being associated with first aid knowledge, attitude, and intention to practice the provision of first aid services to preschool children among preschool teachers were significantly higher (AOR = 2.304; p < 0.01; 95% CI: 1.037, 5.939), (AOR = 1.039; p < 0.01; 95% CI: 0.658, 2.092), and (AOR = 1.793; p < 0.01; 95% CI: 0.985, 3.201), respectively, compared with not being exposed to the training package. Other variables were not significantly associated with the outcome variables, as indicated in the tables. The response and adherence rate of the study participants to the intervention was 100%, which implies that there was no loss to follow-up from the baseline to the end-line assessments.
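The adjusted odds ratios above were obtained from multivariable logistic regression models; purely as an illustration of that type of model (not the authors' code, data, or variable definitions), a sketch using Python's statsmodels could look like the following, with simulated data and hypothetical covariates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical illustration only: simulate 120 teachers with a binary outcome
# (e.g., reaching the knowledge pass mark) whose log-odds depend on training
# exposure plus two example sociodemographic covariates.
rng = np.random.default_rng(42)
n = 120
trained = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
public_sector = rng.integers(0, 2, n)
log_odds = -0.5 + 0.8 * trained + 0.2 * female + 0.1 * public_sector
passed = rng.random(n) < 1 / (1 + np.exp(-log_odds))
df = pd.DataFrame({"knowledge_pass": passed.astype(int), "trained": trained,
                   "female": female, "public_sector": public_sector})

# Multivariable logistic regression; exponentiating the coefficients and their
# confidence limits gives adjusted odds ratios (AORs) with 95% CIs.
model = smf.logit("knowledge_pass ~ trained + female + public_sector", data=df).fit(disp=False)
aors = pd.concat([np.exp(model.params).rename("AOR"),
                  np.exp(model.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})], axis=1)
print(aors.round(3))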
This study found a positive effect of the intervention on preschool teachers' knowledge, attitude, and intention to practice the provision of first aid services to preschool pupils. The effect is probably linked to the prescribed materials in the first aid guide, which focused on real-life pediatric injury scenarios and on practical sessions covering the best available strategies to address them. The timing and the discussion-based model of facilitating the training are also likely to have enhanced inquiry and participatory learning among pre-school teachers in finding appropriate strategies for managing pediatric injuries in schools, which in turn would have strengthened the effect of the intervention on their first aid knowledge, attitude, and intention to provide first aid management to pre-school children. Moreover, the inclusion of trained research trainers who also had expertise in emergency care and/or teaching made the intervention more feasible and effective in enhancing preschool teachers' competencies in providing first aid services to preschool children. The number and frequency of sessions were prescribed at intervals judged sufficient for pre-school teachers to grasp new knowledge about first aid services and to demonstrate it with minimal support from the trained research trainers. Thus, the findings of this study imply that the delivery of pediatric first aid training to pre-school teachers has a significant effect on knowledge, attitude, and intention to practice the provision of first aid services to pre-school pupils in Pemba Island. The intervention also appeared to foster interactive communication among pre-school teachers throughout its delivery, which is an essential component of any contemporary educational setting. In line with the findings of this study, the work of Bandyopadhyay et al , Hassan et al , and Shetie et al found that a well-designed and well-implemented first aid health education and training program grounded in social constructivism improves the knowledge and practice of kindergarten teachers in addressing pediatric emergencies in schools. The similarity is probably due to the resemblance in study topic, study population, and/or study settings. Moreover, Gharsan et al suggested that first aid training among preschool teachers promotes the healthy survival of children because of the competencies teachers demonstrate after being trained in first aid management of pediatric injuries in schools. Their findings, like those of the current study, suggest that multidisciplinary strategies for addressing pediatric injuries among pre-school pupils have greater potential than leaving the task to health workers alone. Méndez , Li, Sheng, Zhang, Jiang, and Shen , and Li, Jiang, Jin, Qiu, and Shen added that it is very important to empower pre-school teachers with first-aid management competencies through well-structured and well-implemented first-aid training courses of a participatory pedagogical nature, because such interventions are positive predictors of teachers' first aid responsiveness competencies. Pre-school teachers are the people closest to pre-school pupils, who need their close attention and timely response to health problems in schools.
Empowering preschool teachers with appropriate and specific first aid management strategies may help reduce the morbidity and mortality caused by pediatric injuries among children in pre-schools. Although the findings of this study are limited by the use of a self-reported questionnaire that does not measure the actual practice of providing pediatric first aid services to pre-school pupils, the study assessed the intention of the pre-school teachers, which is the primary driver of such practice. Measuring actual practice would have required observing events over time to see how teachers provided pediatric first aid services and to document reduced effects of injuries among pre-school pupils within a given period. The study was also constrained in including an adequate number of experts, such as curriculum developers, clinical experts in emergency and critical care, and educators, who could have been consulted to provide constructive input during the development and prototyping of the pediatric first aid training materials. Working in Teachers' Resource Centers requires adequate non-human resources, such as transport and the associated fares for the principal investigator and research trainers; however, the current study lacked efficient and consistent transport, as it took place during the rainy season, which led to some delays in the timely commencement of some sessions. When implementing pediatric first aid training as in this study, the researcher needs to establish close and frequent consultations with experts in curriculum or program development, meet with educational and health stakeholders to sharpen the ideas and pedagogies of the materials, and spare some time for prototyping the materials before the main study. The most prominent challenge was to ensure that the research trainers adhered to the prescribed pedagogical knowledge and content in the pediatric first aid materials, as there were no fixed cameras to monitor whether they integrated other pedagogies not prescribed in the training guide. However, the research team made frequent unannounced visits to the training venues to monitor, support, and emphasize the trainers' adherence to the guide. The first aid training for pre-school teachers was valid and feasible in Zanzibar. The end-line findings provide robust evidence that first-aid training has a potential effect on enhancing knowledge, attitude, and intention to practice the provision of first-aid services to pre-school pupils. The training may be considered an alternative pedagogical approach that advocates multidisciplinary strategies for ending pediatric injuries among pre-school children in Zanzibar. However, its preparation needs prototyping and frequent, consistent expert consultations to define what teaching and learning experiences to organize and how, the length of sessions and duration of the course, the frequency of sessions, and the timing of its evaluation. This study recommends regular professional development training programs on managing pediatric injuries in schools for preschool teachers in Zanzibar, to ensure that children's health is protected for their healthy adulthood and as an investment in the future.
A Comprehensive Analysis of COVID-19 Misinformation, Public Health Impacts, and Communication Strategies: Scoping Review | b99400c2-670b-4b8c-b350-52c783f49fea | 11375383 | Health Communication[mh] | Background The COVID-19 pandemic, a health crisis of unprecedented scale in the 21st century, was accompanied by an equally significant and dangerous phenomenon—an infodemic . The World Health Organization defines an infodemic as the rapid spread and overabundance of information—both accurate and false—that occurs during an epidemic . A tidal wave of misinformation, disinformation, and rumors characterized the infodemic during the COVID-19 pandemic. This led to widespread confusion, mistrust in health authorities, noncompliance with health guidelines, and even risky health behaviors . Moreover, the role of political leaders in shaping the narrative around COVID-19 policies significantly influenced these dynamics. In countries such as the United States, Brazil, and Turkey, the intersection of political ideology and crisis management led to increased societal polarization. Leaders in these nations used communication strategies ranging from denying the severity of the pandemic to promoting unproven treatments . This complex interplay between leadership communication and public response underscores the critical need for science-based policy communication and the responsible use of social media platforms to combat misinformation and foster societal unity in the face of a global health crisis. Furthermore, the emergence of the COVID-19 infodemic highlighted the crucial role of social media literacy in combating misinformation. Educating the public on discerning credible information on the web has emerged as a pivotal strategy for mitigating the spread of misinformation and its consequences . Misinformation during public health crises has been a recurring problem. Historical examples from the Ebola outbreak, such as rumors that the virus was a government creation or that certain local practices could cure the disease, highlight how misinformation can hinder public health responses . False beliefs, such as that drinking salt water would cure Ebola or that the disease was spread through the air, led to a mistrust of health workers and avoidance of treatment centers, exacerbating the crisis . In the context of COVID-19, misinformation was particularly pervasive, with false claims about the effectiveness of various nostrums, leading to panic buying and shortages . The impact of such misinformation varied across regions . These dynamics were often fueled by psychological and social factors, including fear, uncertainty, and the reinforcing nature of social media algorithms, which created echo chambers of false information . The wide-ranging consequences affected not only immediate health behaviors but also the trust in, and response to, public health authorities . Misinformation during a public health crisis is nothing new. However, the scale and speed at which misinformation spread during the COVID-19 pandemic are unparalleled. This situation was exacerbated by the widespread use of social media and the internet, where rumors can rapidly reach large audiences . This spread of misinformation had far-reaching consequences: it undermined public health efforts, promoted harmful practices, contributed to vaccine hesitancy, and possibly prolonged the pandemic . 
These effects went beyond individual health behaviors; they influenced public health policies and diminished trust in health authorities and the scientific community . In light of these challenges, the machine learning–enhanced graph analytics (MEGA) framework has emerged as a novel approach to managing infodemics by leveraging the power of machine learning and graph analytics. This framework offers a robust method for detecting spambots and influential spreaders in social media networks, which is crucial for assessing and mitigating the risks associated with infodemics. Such advanced tools are essential for public health officials and policy makers to navigate the complex landscape of misinformation and to develop more effective communication strategies . Furthermore, combating this infodemic necessitates a strategic approach encapsulating the “Four Pillars of Infodemic Management”: (1) monitoring information (infoveillance) to track the spread and impact of misinformation; (2) enhancing eHealth literacy and science literacy, empowering individuals to evaluate information critically; (3) refining knowledge quality through processes such as fact checking and peer review, ensuring the reliability of information; and (4) ensuring timely and accurate knowledge translation, minimizing the distortion by political or commercial interests . These measures are essential for mitigating the impact of misinformation and guiding the public and professionals toward quality health information during the pandemic and beyond. The COVID-19 pandemic has highlighted the need for improved public health communication and preparedness strategies, particularly in countering misinformation to prevent similar challenges in future health crises . Pertinent Questions Recognizing the unique challenges posed by the COVID-19 infodemic, this comprehensive scoping review seeks to systematically explore various dimensions of misinformation related to the pandemic. Our investigation is informed by a critical analysis of existing literature, noting a gap in studies that collectively examine the themes, sources, target audiences, impacts, interventions, and effectiveness of public health communication strategies against COVID-19 misinformation. To the best of our knowledge, this is the first review that attempts to bridge this gap by providing a comprehensive and integrated analysis of these key dimensions. While individual aspects of misinformation have been addressed in prior research, there lacks a comprehensive review that integrates these components to offer a holistic understanding necessary for effective countermeasures. Therefore, our review is structured around four pertinent questions, each carefully selected for their significance in advancing our understanding of the COVID-19 infodemic and its counteraction: What is the extent of COVID-19 misinformation? How can it be addressed? What are the primary sources of COVID-19 misinformation? Which target audiences are most affected by COVID-19 misinformation? What public health communication strategies are being used to combat COVID-19 misinformation? These questions were selected to emphasize critical areas of COVID-19 misinformation that, when addressed, can significantly contribute to bridging technical and knowledge gaps in our response to current and future public health emergencies. 
By detailing our study’s contributions to existing literature, we aim to present distinctive understandings crucial for policy makers, health professionals, and the public in effectively addressing misinformation challenges. This scoping review was conducted following the methodology framework defined by Arksey and O’Malley and elaborated upon by Levac et al . This framework, recognized for its systematic approach, involves five stages: (1) defining the research question; (2) identifying relevant studies; (3) selecting appropriate literature; (4) charting the data; and (5) collating, summarizing, and reporting the results. Databases and Search Strategies The literature search targeted 3 major databases: MEDLINE (PubMed), Embase, and Scopus. These databases were selected for their comprehensive coverage of medical, health, and social science literature. The search strategy was crafted using a combination of keywords and subject headings related to COVID-19, misinformation, and public health communication. We used (“COVID-19” OR “SARS-CoV-2” OR “Coronavirus”) AND (“Misinformation” OR “Disinformation” OR “Fake news” OR “Infodemic”) AND (“Public health outcomes” OR “Health impacts”) AND (“Communication strategies” OR “Public health communication”). Eligibility Criteria The inclusion and exclusion criteria are presented in . Inclusion and exclusion criteria. Inclusion criteria Article type: peer-reviewed studies Language: published in English Publication date: published between December 1, 2019, and September 30, 2023 Focus: addresses COVID-19 misinformation and its sources, themes, and target audiences, as well as the effectiveness of public health communication strategies Study design: empirical studies (eg, cross-sectional, observational, randomized controlled trials, qualitative, and mixed methods) Exclusion criteria Article type: non–peer-reviewed articles, opinion pieces, and editorials Language: published in languages other than English Publication date: published before December 1, 2019, or after September 30, 2023 Focus: does not address COVID-19 misinformation or its related aspects Study design: case studies and anecdotal reports Study Selection Process The study selection process involved an initial screening of titles and abstracts to eliminate irrelevant studies, followed by a thorough full-text review of the remaining articles. This critical stage was conducted by the authors, each with expertise in public health communication and health services research, thereby enhancing the thoroughness and reliability of the selection process. In cases of disagreement, the reviewers engaged in discussions until a consensus was reached on the inclusion of each article. In addition, we adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines to enhance the thoroughness and transparency of our review (see for the PRISMA-ScR checklist). The literature search targeted 3 major databases: MEDLINE (PubMed), Embase, and Scopus. These databases were selected for their comprehensive coverage of medical, health, and social science literature. The search strategy was crafted using a combination of keywords and subject headings related to COVID-19, misinformation, and public health communication. 
Overview A total of 390 articles were identified from the 3 databases, of which, after removing 134 (34.4%) duplicates, 256 (65.6%) articles remained. Of these 256 articles, 69 (27%) were selected based on title and abstract screening. Of the 69 full-text articles, 27 (39%) were assessed for eligibility. Of these 27 studies, 21 (78%) were included in the scoping review . This analysis of the 21 studies provides a comprehensive overview of the many impacts of misinformation during the COVID-19 pandemic, including its characteristics, themes, sources, effects, and public health communication strategies. Study Characteristics The included studies exhibited considerable diversity in terms of their methodologies, geographic focus, and objectives . Verma et al conducted a large-scale observational study in the United States, analyzing social media data from >76,000 users of Twitter (subsequently rebranded X) to establish a causal link between misinformation sharing and increased anxiety. By contrast, Loomba et al carried out a randomized controlled trial in both the United Kingdom and the United States to examine the impact of misinformation on COVID-19 vaccination intent across different sociodemographic groups. In the United States, Bokemper et al used randomized trials to assess the efficacy of various public health messages in promoting social distancing. Xue et al used observational methods to explore public attitudes toward COVID-19 vaccines and the role of fact-checking information on social media.
These studies collectively used quantitative analysis, web-based surveys, cross-sectional studies, and social network analysis, reflecting the diversity of research approaches. Sample sizes ranged from hundreds to tens of thousands of participants, providing a broad view of the infodemic’s impact. Notably, most of the studies (17/21, 81%) were conducted on the web, underlining the infodemic’s digital nature. The outcomes assessed various public health aspects, including mental health, communication effectiveness, and behavior change. Kumar et al used social network and topic modeling analyses to gain insights into public perceptions on Reddit, contributing to the methodological diversity within the reviewed literature. Misinformation Themes and Sources Misinformation Themes The results of the studies reported many themes that presented a diverse and interconnected landscape of COVID-19 misinformation. A significant amount of this misinformation related to the virus’s origins and transmission, with theories varying from accidental laboratory releases to purported links with 5G technology. These theories often reflected a tendency to misinterpret scientific data or attribute the pandemic to external and frequently sensational causes . A significant proportion of misinformation concerned treatments and preventives for COVID-19, where unscientific remedies (accidental or deliberate) and vitamin supplements were touted as effective . This was coupled with widespread misconceptions and conspiracy theories about COVID-19 vaccines . Public health measures such as the effectiveness of masks and social distancing were often mischaracterized or misrepresented, sometimes due to political and economic theories . Social media played a significant role in amplifying dangerous beliefs and practices . The studies demonstrate that misinformation during the pandemic ranged from basic misunderstandings to elaborate conspiracy theories . Sources of Misinformation The studies provide a comprehensive analysis of the various sources of COVID-19 misinformation, with a particular focus on social media platforms such as Facebook, WhatsApp, Twitter, Reddit, and YouTube, which were repeatedly identified as primary channels for spreading false information . These platforms not only facilitated the spread of misinformation through user-generated content but also through public figures and political leaders, whose remarks often fueled rumors and unsubstantiated claims . Traditional media sources, including television, newspapers, and radio, also added to the misinformation landscape, either by directly spreading false information or by passing on misleading statements and rumors . The influence of informal networks, such as family, friends, and community gatherings, was highlighted, pointing to the significance of word-of-mouth communication in the dissemination of misinformation . Furthermore, the studies identified specific web-based communities and forums, such as Facebook groups and subreddits, where misinformation was not only shared but also reinforced within echo chambers . 
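Several of the included studies surfaced such themes computationally, for example, through the social network and topic modeling analyses of Reddit discussions noted above. The snippet below is a purely hypothetical sketch of that general approach using scikit-learn's latent Dirichlet allocation (LDA); the toy corpus, the number of topics, and the preprocessing choices are illustrative and do not reproduce any included study's pipeline.

```python
# Illustrative sketch: surfacing misinformation themes with LDA topic modeling.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy corpus standing in for scraped social media posts.
posts = [
    "5g towers are spreading the virus",
    "the virus leaked from a laboratory",
    "vitamin supplements cure covid",
    "drinking herbal remedies prevents infection",
    "masks do not work and distancing is pointless",
    "vaccines change your dna",
]

# Bag-of-words representation of the posts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Fit a small LDA model; the number of topics is an illustrative choice.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Print the top words per topic as a rough label for each theme.
terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {idx}: {', '.join(top)}")
```

In a real analysis, the resulting topic-word distributions would be inspected and manually labeled (eg, "origins and transmission," "unproven remedies," "vaccine conspiracy") before being related to sources and audiences.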
Target Audience of Misinformation The selected studies revealed a complex landscape of COVID-19 misinformation targeting diverse audiences, with a significant focus on the general public across countries; for instance, Datta et al and Hou et al identified both health care professionals and the broader global population, including those in China, the United States, and countries with traditional medicine practices, as key recipients of misinformation . Susceptibility to misinformation was also observed in individuals with low health literacy, depression, or susceptibility to conspiracy theories or vaccine-hesitant individuals and those with a mistrust of vaccines . Digital platforms played a significant role in shaping public perceptions, with studies highlighting the impact of misinformation on social media users, online forum participants, and those engaging with user-generated content . Moreover, specific populations such as Serbian adults, American women, racial minority individuals, students, public health professionals, and essential workers were reported as being particularly affected or targeted by misinformation campaigns . Impacts of Misinformation on Public Health Outcomes Identified Negative Impact The findings presented many negative effects of misinformation on public health . One primary consequence was the impact on health care professionals, who faced challenges in discerning accurate information, leading to disruptions in routine decision-making and care practices . The public was also affected, with misdirected responses and increased reliance on unproven remedies, indicating missed opportunities for effective epidemic control . Misinformation significantly disrupted health and risk communication, contributing to social unrest and heightened anxiety . It also directly impacted public health measures, as evidenced by lower intent to accept COVID-19 vaccines , reduced adherence to official health guidelines , and noncompliance with basic preventive measures such as handwashing . The spread of misinformation resulted in decreased public trust in science , undermining the effectiveness of public health messaging and leading to increased vaccine hesitancy . This hesitancy was further exacerbated by the promotion of antivaccine propaganda, posing a barrier to achieving herd immunity . The extent of the impact of misinformation was also evident in the public’s mental health, with reports of increased anxiety, suicidal thoughts, and distress , as well as in overall public attitudes toward the pandemic and changes in public attitudes toward vaccines, which became increasingly negative over time . Measured Outcomes The studies highlighted the challenges that individuals and communities faced in navigating the pandemic amid a flood of misinformation . It was reported that misinformation significantly impacted health care professionals, leading to discomfort, distraction, and difficulty in discerning accurate information. This impact affected decision-making and routine practices . The public’s response was manifested by changes in search behaviors and purchasing patterns, reflecting the influence of rumors and celebrity endorsements . It was reported that “fake news” significantly affected the information landscape, skewing the perception of truth versus lies . Hesitancy was reported in intent to receive COVID-19 vaccines across demographic groups . 
The misinformation also altered health behaviors, such as handwashing and the use of disinfectants, and influenced preventive behavioral intentions . It was also reported that misinformation affected public adherence to COVID-19 prevention, risk avoidance behaviors, and vaccination intentions . The communication strategies during quarantine, public trust and engagement with authorities, and compliance with quarantine measures were influenced by the level of concern, which was shaped by misinformation . It was reported that misinformation led to changes in social distancing and mask wearing . Social media platforms exhibited a prevalence of antivaccine content and a focus on misinformation in web-based discussions . The studies also reported that emotional and linguistic features in vaccine-related posts influenced public attitudes toward vaccines, reflecting the impact of different information sources . Anxiety levels were heightened due to exposure to misinformation, especially among specific demographic groups . Some of the studies (2/21, 10%) found that misinformation affected public trust in health experts and government and altered the perceived severity of COVID-19 . Potential Contributing Factors The studies identified a wide array of factors that contributed to the spread of misinformation during the pandemic . Key among these were social media and connections with family and friends, which hastened the spread of unregulated information . The issue was further compounded by delayed and nontransparent communication from health authorities, coupled with the absence of early, authoritative responses . Cognitive biases, a lack of digital and health literacy, and the exploitation of social divisions also played significant roles . Factors such as sociodemographic characteristics, trust in information sources, the frequency of social media use, and the nature of misinformation were important . The spread of misinformation was also influenced by gender, education level, and the distinction between urban and rural living , as well as age, the effectiveness of media channels, the initial understanding of SARS-CoV-2, and trust in authorities, particularly in relation to quarantine measures . Contributing factors included beliefs in conspiracy theories, cognitive intuition, an overestimation of COVID-19 knowledge, and susceptibility to cognitive biases , alongside political orientation and religious commitment . Public behavior was also shaped by concerns about government infringement on personal freedoms . Finally, exposure to fake news and conspiracy stories , cultural attitudes toward government mandates, and the spread of misinformation through social media were noted . Public Health Communication Strategies and Their Effectiveness Intervention Strategies The studies highlighted the critical role of effective public health communication strategies in addressing COVID-19 misinformation . This included a range of approaches such as enhancing health literacy and reinforcing social media policies against fake news , along with using fact checking and empathetic communication to debunk misinformation . The importance of timely and accurate information dissemination, particularly through social media, was also noted as a crucial component for authoritative communication . In addition, several studies advocated for tailored communication approaches. 
These approaches involve targeting specific misinformed subgroups , using infographics to clarify scientific processes , and focusing on community protection while reframing reckless behaviors . Essential strategies included training health care professionals to accurately identify credible information, alongside implementing media literacy campaigns and prioritizing groups considered vulnerable in public communication . Engaging skeptics, particularly vaccine skeptics, through interventions was reported as essential , with an emphasis on debunking misinformation, promoting credible information sources, and reducing exposure to misinformation . Intervention Methods The included studies reported various intervention methods to combat misinformation. Key strategies included the use of credible sources , the implementation of targeted campaigns, and the integration of digital technologies such as social media tools and algorithmic analyses . Educational efforts, ranging from basic loudspeaker announcements to sophisticated web-based educational tools and infographics, were also reported to be effective . The importance of engaging the public through surveys, randomized interventions, and peer discussions was noted . Fact checking, in partnership with third-party organizations and through internal processes, was highlighted as crucial, along with the need for empathetic communication . Finally, some of the studies (2/21, 10%) showed the importance of identifying predictors and using analytical models to refine strategies and better understand public sentiment . Platform or Channel for Communication The studies reported that a diverse array of platforms and channels played a crucial role in effective communication during the COVID-19 pandemic . Digital and social media platforms, such as Facebook, Reddit, and YouTube, were extensively used to disseminate facts and counter misinformation, as noted by numerous studies (8/21, 38%) . Government websites and official channels, alongside health care settings, were also acknowledged for their value in providing reliable and accurate information . Traditional media forms, including television, radio, and print, were found to be crucial in reaching wide audiences . Web-based platforms designed for research and surveys, such as Prolific, played a key role in gauging public perceptions and addressing misinformation . Furthermore, community networks and personal communications were identified as essential, particularly in village health volunteer networks and through engagement with health professionals and academics, demonstrating remarkable effectiveness in local communities and areas with limited digital access . Effectiveness Metrics and Reported Effectiveness In studies on public health communication during the pandemic, effectiveness metrics focused on reducing misinformation and improving health behaviors . Detailed engagement metrics included tracking interactions with verified versus fake news, changes in vaccination intent, and shifts in public attitudes toward vaccines over time . Unique metrics such as internet search trends correlating with public behavior, adherence to health guidelines, and the impact of misinformation on mental health were also explored . Studies such as that by Gruzd et al analyzed social media for misinformation removal and provaccine content. 
The reported effectiveness of interventions such as fact checking and clear communication varied across the studies, influencing vaccine attitudes and trust in science to varying degrees . Some of the studies (8/21, 38%) pointed to increased public support for measures such as quarantine, emphasizing the role of community engagement , but also noted challenges in maintaining long-term effectiveness and addressing various reactions such as anxiety in response to misinformation . These studies, often based on computational analyses, existing literature, and theoretical models, highlighted the complex, multifaceted nature of public health communication during the pandemic . Recommendations, Gaps, and Future Directions Recommendations for Addressing COVID-19 Misinformation The included studies recommended a comprehensive approach that included strategic public health communication, educational initiatives, and policy adaptation . Key themes included effective information regulation and enhancing discernment skills among health care professionals as well as the general public , while strategies included considering platform-specific and demographic-focused approaches to combat misinformation . Governmental leadership and international coordination were considered crucial , and educational strategies were recommended to focus on improving health literacy and researching misinformation inoculation . Public health messaging and web-based moderation policies were deemed effective , and technological interventions and comprehensive policy making were recommended . Methodological research to understand extended debates and debunking techniques was emphasized , as well as tailored communication and messaging strategies . Identified Gaps in Addressing Misinformation The studies highlighted several gaps in managing COVID-19 misinformation and public health communication. Challenges included distinguishing authentic information from misinformation, the persistence of fake news, and the presence of echo chambers in social media networks . Timely, actionable advice for personal protection and effective risk communication during the early stages of the pandemic was lacking . Research limitations included a lack of real-world simulation, leading to challenges in generalizability . There was insufficient understanding of the role of health authorities as trusted sources, media preference during crises, and the effectiveness of information dissemination in different regions . Challenges arising from legal and ethical considerations, resource limitations, disparities in education access, and insufficient exploration of the relationship between misinformation and vaccine acceptance were also noted . Proposed Future Research and Actions Future research directions included developing guidelines for medical information dissemination, enhancing crisis communication skills among health care professionals, and creating targeted interventions based on demographics . Evaluating the impact of governmental and international organization communications, conducting research within social media settings, and analyzing the impact of misinformation more accurately were recommended . Studying media habits during crises, examining long-term behavioral changes after quarantine, and dissecting the influential aspects of messages were suggested . 
Investigating psychological factors, evaluating emotional appeals in health communication, and developing strategies for credible sources to enhance their social media influence were proposed . Ethically and legally compliant technological interventions, efficient resource allocation policies, and extensive studies on psychological impacts were recommended . Mourali and Drake proposed quantifying extended debates, studying message elements and sources, and exploring “prebunking.” Longitudinal studies, research on user engagement with social media content, and interventions to mitigate misinformation effects were highlighted . Finally, the studies suggested a holistic approach involving collaboration among companies, governments, and users; continuous monitoring of misinformation trends; regular fact checking; legal actions against sources of misinformation; and specific communications to debunk myths .
Principal Findings Our study underscores the profound influence of misinformation during the COVID-19 pandemic, particularly in shaping public responses. Misinformation, primarily propagated through social media, led to widespread misconceptions about the severity of COVID-19 infection, triggering public confusion, reluctance to adhere to health guidelines, and increased vaccine hesitancy. This phenomenon significantly impacted vaccine uptake rates. Gallotti et al highlighted the simultaneous emergence of infodemics alongside pandemics, underlining the critical role of both human and automated (bots) accounts in spreading information of questionable quality on platforms such as Twitter. The authors introduced an Infodemic Risk Index to measure the exposure to unreliable news, showing that the early stages of the COVID-19 pandemic saw a significant spread of misinformation, which only subsided in favor of reliable sources as the infection rates increased . This emphasizes the complex challenge of managing infodemics in tandem with biological pandemics, necessitating adaptive public health communication strategies that are responsive to evolving information landscapes. Our findings resonate with historical observations in public health crises, evidenced by studies on the Zika virus outbreak , polio vaccination efforts in India and Nigeria , and the Middle East respiratory syndrome outbreak . Similar patterns of misinformation were also noted in the H1N1 pandemic and the Ebola outbreak. These instances highlight the critical need for clear, proactive communication strategies to effectively manage misinformation and guide public understanding and responses. The review also reveals a predominant focus on digital misinformation, underscoring the necessity to comprehend the impact of traditional media and word-of-mouth communication in spreading misinformation.
While studies such as that by Basch et al have started to address this gap, there is a clear need for more extensive research, particularly on the long-term effects of misinformation on public health behaviors after a pandemic. This shift toward credible information, as observed by Gallotti et al , signals an opportunity for future research to explore capitalizing on changing information consumption patterns in public health messaging. Such observations are crucial for developing effective communication strategies, highlighting the necessity of integrating infodemic management with pandemic response efforts to mitigate misinformation effects and guide public behavior appropriately. The disparity in the effectiveness of misinformation mitigation strategies points to the need for a nuanced understanding of how misinformation evolves over time. Studies, such as that by Vijaykumar et al , highlight the challenges in countering rapidly changing misinformation narratives on digital platforms. Further investigation into the effectiveness of fact checking across different cultures and demographics, as suggested by Chou et al , is essential for developing better strategies to combat misinformation in diverse settings. This review found that various factors, including delayed communication from health authorities, cognitive biases, sociodemographic characteristics, trust in official sources, and political orientation, played a significant role in the spread of misinformation during the pandemic. These findings align with similar observations in other studies. Eysenbach emphasized the importance of trust in government agencies and health care providers in shaping individuals’ beliefs and their willingness to share accurate information during public health crises. In addition, Pennycook and Rand highlighted how political beliefs and affiliations can influence people’s interpretation of information, thus impacting their acceptance or rejection of official guidance during public health crises. The study by Gallotti et al also highlighted the differentiated roles of verified and unverified users on social media in propagating COVID-19–related information. Their analysis shows that verified users began to point more toward reliable sources over time, hinting at the potential of leveraging social media influencers and verified accounts in directing public attention to factual and scientifically verified information . These insights indicate the critical need for dynamic public health strategies that are adaptable and actionable, aimed at curtailing misinformation through education and technology. It is essential to incorporate digital literacy and clear, audience-specific messaging to effectively counter misinformation, a strategy that has proven successful in health crises beyond the COVID-19 pandemic; for example, during the H1N1 pandemic, targeting specific audience segments with tailored messages significantly improved public understanding and guideline compliance . Likewise, during the Ebola outbreak, proactive and transparent strategies were key in dispelling rumors and building trust in public health authorities . These approaches, based on an understanding of the target audience’s concerns and media habits, are consistent with our findings where digital literacy and targeted messaging played a critical role in mitigating COVID-19 misinformation effects. 
Such strategies are vital not only for immediate crisis response but also for fostering long-term resilience in public health communication, helping to enable the public to distinguish credible information from misinformation, with the ultimate goal of enhancing public health outcomes and trust in health authorities. In examining the authoritarian responses to the pandemic, particularly in Brazil and Turkey, it is evident that leadership tactics significantly contributed to societal polarization and misinformation. Leaders in these countries used the crisis to suppress dissent and consolidate power, often spreading misinformation and underreporting COVID-19 cases, thereby exacerbating public mistrust and confusion . Similarly, a study of communication strategies across countries with high rates of infection emphasized the variation in political leaders' approaches, where strategies ranged from science-based communications to ideologically influenced messaging . The study highlighted the potential for political leaders to influence public health responses through their communication tactics, further impacting public behavior and trust in health guidelines . In certain situations, the integration of political ideology with public health messaging, as observed in countries such as the United States, Brazil, India, and the United Kingdom, not only perpetuated misinformation but also intensified societal rifts . This highlights the paramount role of leadership in navigating public health crises; for instance, in the United States and Brazil, political leaders' approaches to the COVID-19 pandemic—characterized by mixed messaging on mask wearing and social distancing—contributed to public confusion and a politicized response to the pandemic. Similarly, the initial underestimation of the virus's impact in India and the United Kingdom's delayed lockdown response serve as examples of how political decisions can shape public health outcomes and trust in health authorities, emphasizing the profound impact of aligning political views with public health communication . In addition, the initial reluctance of the World Health Organization to endorse mask wearing, social distancing, and handwashing, followed by a later reversal of these recommendations, exemplifies the challenges and confusion created by global health leadership during the early stages of the pandemic . Such shifts in guidance contributed to the global spread of misinformation, further complicating public health responses and trust in international health authorities . Applying the MEGA framework in practical settings could revolutionize public health communication, offering a model for how technology can be harnessed to tackle misinformation more effectively. By processing massive graph data sets and accurately computing infodemic risk scores, MEGA supports the development of targeted communication strategies and interventions.
Its approach to preserving crucial feature information through graph neural networks signifies a leap forward in optimizing learning performance, underscoring the framework’s utility in crafting evidence-based policies and initiatives to effectively combat misinformation. This emphasizes the importance of integrating advanced technological solutions, such as MEGA, into public health strategies to enhance the precision and effectiveness of infodemic management . The integration of social media literacy into public health strategies is emphasized as essential by Ziapour et al , suggesting that a populace equipped with advanced media literacy skills exhibits greater resilience against misinformation. Our study reveals the profound impact of the COVID-19 infodemic, which extended beyond public health and eroded trust in health institutions and government authorities. This decline in trust contributed to societal polarization, mirroring the effects seen in the Ebola outbreak, where misinformation led to notable repercussions . Further research, similar to that conducted on the Zika outbreak by Basch et al , is needed to understand the long-term effects of misinformation on societal cohesion and trust. Addressing this evolving landscape of misinformation requires dynamic and adaptable public health policies. These strategies should integrate insights from various methodologies, using both digital and traditional media for greater reach and impact, drawing lessons from the successful strategies deployed during the H1N1 pandemic, such as those highlighted by Chou et al . Our study advocates for a collaborative approach, uniting governments, the private sector, and the public in a concerted effort to combat misinformation, highlighting the importance of joint action in this global challenge. This approach should include continuous monitoring of misinformation trends, implementing regular fact checking, taking legal action against sources of misinformation, and developing specific communications to debunk myths. Similar findings have been reported in studies addressing misinformation related to the Zika virus , yellow fever , and Ebola , emphasizing the importance of a holistic strategy involving all stakeholders . Limitations The review has several limitations to consider. First, there is a temporal limitation because it included only studies published between December 2019 and September 2023, potentially excluding more recent research that could have offered additional insights. Second, the reliance on specific databases (MEDLINE [PubMed], Embase, and Scopus) as the primary sources for data might have led to the omission of pertinent studies that are not indexed in these databases. Third, the study’s sole focus on research articles may have excluded valuable insights from other scholarly works such as conference papers, theses, case studies, and gray literature. Finally, it is important to acknowledge that the study’s restriction to English-language publications may have excluded valuable research conducted in other languages. While efforts were made to review the available literature comprehensively, omitting non-English sources could limit the breadth and depth of the findings. Recognizing these limitations, future endeavors should aim to expand the scope of research beyond these constraints, incorporating a more diverse range of sources, languages, and real-world interventions to enrich our understanding of, and response to, misinformation. 
Conclusions The results of this review emphasize the significant and complex challenges posed by misinformation during the COVID-19 pandemic. It shows how misinformation can have a wide impact on public health, societal behaviors, and individual mental well-being. The findings highlight the critical role of effective public health communication strategies in addressing the infodemic. It is essential that these strategies are not only targeted and precise but also adaptable and inclusive, ensuring that they are relevant to diverse demographic and sociocultural contexts. The review also emphasizes the need for ongoing collaborative research efforts to further explore the nuances of the misinformation spread and its consequences. This requires cooperation among health authorities, policy makers, communication specialists, and technology experts to develop evidence-based approaches and policies to combat misinformation. Furthermore, the review highlights the importance of refining public health communication strategies to keep up with the ever-changing nature of misinformation, especially in the digital realm. It advocates using advanced technology and data-driven insights to enhance the reach and impact of health communication. By combining scientific rigor, technological innovation, and empathetic communication, these strategies can contribute to building public trust, promoting health literacy, and creating resilient communities capable of recognizing and countering misinformation. In summary, the lessons learned from the COVID-19 pandemic emphasize the necessity of strengthening public health communication infrastructures. This strengthening is vital for addressing the current misinformation crisis and preparing for future public health emergencies. Implementing these recommendations will play a crucial role in shaping a more informed, aware, and health-literate global community better equipped to confront the challenges posed by misinformation in our increasingly interconnected world. Furthermore, future research directions should explore integrating advanced large language models with frameworks similar to MEGA. This exploration will bolster automated fact checking and infodemic risk management, contributing to more effective strategies in combating misinformation in public health communication.
|
Redox cycling of sulfur via microbes in soil boosts the bioavailability of nutrients to | d762c093-0022-439b-83e8-cdf2700da70b | 11849842 | Microbiology[mh] | Sulfur is among the essential nutrients required for the proper growth of plants, animals, humans, and microorganisms . Being a key component of amino acids S is vital for the formation of amino acids like methionine and cysteine, which are fundamental for plant growth and the synthesis of proteins and vitamins . Sulfur plays a key role in forming vitamins, proteins, and oils improving plants compounds. A deficiency in S impairs photosynthetic activity, disrupts nitrogen (N) metabolism, reduces oil content, and hampers overall plant growth, with pronounced effects on both shoot and root development . Its deficiency causes a decrease in S-containing amino acids and protein synthesis . Additionally, its deficiency causes the yellowing of younger leaves known as chlorosis followed by necrosis in later developmental stages. Lower S availability also affects N fixation because both N and S are central parts of the protein . Oilseed crops, viz., soybean, groundnut, rapeseed, and sunflower are required more S, followed by cereals and pulses . Therefore, to increase S availability, S-oxidizers are used to improve the natural oxidation rate and enhance the production of SO 4 −2 , making it available to crop plants at critical stages . The application of mineral fertilizer having S has also been reported to improve nutrient availability, by improving soil physicochemical properties . Application of S increased the synthesis of amino acids and also enhanced amounts of N 2 fixed in leguminous plants and soil . The availability of S in the soil is affected by physiochemical factors and pedogenic processes. Most soils across the globe are S-deficient . Sulfur deficiency is more common compared to other nutrients in soils of Northern Europe with oilseed cropping . Additionally, S deficiency could also be attributed to decreased S storage via deposition from the atmosphere in the last two to three decades. Furthermore, S-containing fertilizers like single superphosphate, farmyard manure, and compost have been substituted by chemical fertilizers having no or little S . Beneficial microorganisms from the rhizosphere and phyllo-sphere were isolated and evaluated as plant growth promotors to reduce agrochemicals application in soil . Mechanisms directly involved in plant growth promotion relate to higher nutrient acquisition through fixation, solubilization in soil (N fixation, P, K, and S solubilization), hormone production (auxins, cytokinin, aminolevulinic acid, gibberellins, and abscisic acid) and iron sequestration through bacterial siderophores, and ACC deaminase synthesis to reduce formation of ethylene . Whereas, indirect growth stimulation mechanisms include, the reduction of stresses viz., salinity, drought, heavy metals, and temperature . Microbes possessing plant growth-promoting potential have been commercialized as biofertilizers, viz., N-fixing, P-solubilizing, K-mobilizers, PGPR, and mycorrhizal fungi , and inoculation of such microbes alter microbial diversity and change rooting patterns that help in nutrients management . Microbes involved in S cycling could be used as bio-fertilizers, having low-input and environment-friendly technology for sustainable agriculture ecosystems . Plants absorb S in the form of inorganic SO 4 2− and some microbes can oxidize S into SO 4 − 2 form known as SOB . 
Bacteria belonging to the genera Thiobacillus and Acidithiobacillus are involved in S-oxidation (Shinde et al., 2022). Several studies have been undertaken using SOB as a microbial inoculant, and results showed around a 47–69% increase in onion yield compared to the control . Moreover, combined inoculation with SOB and N-fixing bacteria improved plant yield and N uptake (by 220% and 630%, respectively) compared to non-inoculated plants. Similarly, bio-fertilization with N-fixing strains ( Azotobacter and Azospirillum ), in addition to inorganic N, enhanced the oil content and grain yield of canola . SOB enhanced S-oxidation, resulting in higher availability of SO 4 − 2 to the mustard crop . Sulfur bacteria comprise a diverse group of organisms capable of utilizing oxidized, reduced, or partially oxidized inorganic S compounds. The genus Thiobacillus is the most important among the different groups of S-oxidizing bacteria (SOB) responsible for S oxidation. Application of Thiobacillus bacteria enhances S and P availability in soil. Canola is among the most important oilseed crops across the globe . P-solubilizing bacteria and SOB enhance canola performance in calcareous soils by improving the absorption of plant nutrients. The performance of bacterial inoculants can be improved by the addition of organic matter (OM). Therefore, the correct combination of chemical and biological sources can considerably boost canola production and development by improving nutrient absorption . Given the importance of S as a key macronutrient, further studies are needed on the soil microorganisms involved in its biogeochemical cycle. The novelty of the current study lies in a comprehensive evaluation of the combined effects of SOB and SRB applied with the recommended dose of NPK fertilizer. This approach augments our understanding of how microbial interactions enhance nutrient bioavailability in soil, offering a novel approach to optimizing nutrient management in crop cultivation. The objective of the current study was to evaluate the impact of the recommended dose of NPK with or without SOB, and the combined effect of SOB and SRB along with the recommended dose of NPK. It was hypothesized that their integrated application synergistically affects macro- and micronutrient bioavailability in soil and uptake by canola. 2.1. Soil characterization Ten composite samples of soil (0–30 cm) were collected at the start of the experiment to analyze soil pH, EC , OM , and texture . The texture of the experimental soil was recorded as silty clay loam, having pH = 7.53, EC = 0.252 dSm − 1 , N (0.54 g kg − 1 ), P (6.91 mg kg − 1 ), K (131 mg kg − 1 ), S (6.94 mg kg − 1 ), zinc (0.31 mg kg − 1 ), manganese (4.01 mg kg − 1 ), and iron (4.2 mg kg − 1 ). 2.2. Isolation of SOB and SRB SOB isolation was carried out with a thiosulphate broth medium. Its composition was: Na 2 S 2 O 3 5.0 g; NaHCO 3 0.2 g; NH 4 Cl 0.1 g; and K 2 HPO 4 0.1 g, dissolved in distilled (DI) water (1.0 L). The medium pH was adjusted to 8.0, and Bromocresol purple was used as the indicator. The medium was autoclaved for sterilization and subsequently poured into pre-sterilized tubes, and upon solidification, streaking was done to purify the isolated strains. The tubes were incubated for 4–5 days at 30 °C . Enrichment and isolation of SRB were done using a medium containing, per liter of DI water: ammonium sulfate 5.3 g, sodium acetate 2.0 g, KH 2 PO 4 0.5 g, magnesium sulfate·7H 2 O 0.2 g, sodium chloride 1.0 g, calcium chloride·2H 2 O 0.1 g, Solution 1 10.0 mL, and Solution 2 1.0 mL.
Solution 1 contained, per liter of DI water: nitrilotriacetic acid 12.8 g, FeCl 2 ·4H 2 O 300.0 mg, copper chloride 20.0 mg, MnCl 2 ·4H 2 O 100.0 mg, CoCl 2 ·6H 2 O 170 mg, zinc chloride 100 mg, H 3 BO 3 10.0 mg, and Na 2 MoO 4 ·2H 2 O 10 mg. Solution 2 contained resazurin 0.2 g in 100 mL of DI water. After autoclaving and cooling, the medium was supplemented with sterile anaerobic stock solutions of the following components: 50 mL of 8% Na 2 CO 3 in water, 5.5 mL of 25% hydrochloric acid, and about 1.0 mL of 8.7% Na 2 S 2 O 4 in water; the pH was adjusted to 7.2 by adding HCl . 2.3. Purification of SOB and SRB Isolate purification was carried out by transferring the isolates to fresh broth medium. Streaking of the isolates was carried out on thiosulfate (S 2 O 3 2− ) agar plates to obtain individual colonies. These pure isolates were maintained for characterization and further testing . 2.4. Characterization of SOB and SRB Isolated strains were characterized through colony morphology, elevation pattern, colony margins, colony colour, colony form, and opacity. Additionally, biochemical and morphological characteristics were studied to characterize the isolated bacterial strains by following . 2.5. Gram staining Bacterial strains were further subjected to Gram staining as explained by . A wire loop was first heated on a spirit lamp and loaded with an individual bacterial isolate, which was spread on a glass slide, air-dried, and stained with crystal violet for two minutes, followed by a gentle wash with deionized water. Later on, the smear was flooded with iodine solution and de-colorized using 75% alcohol. After de-colorization, the smear was stained with safranin. The smear was dried by passing the glass slide 2 to 3 times over the spirit lamp, and the slide was placed under a light microscope to observe the staining reaction of each isolate. 2.6. Treatment plan and experimental design A greenhouse experiment was undertaken at PMAS-Arid Agriculture University Rawalpindi to assess the potential impact of SOB and SRB inoculation and synthetic fertilizer on canola production, soil nutrient (macro- and micronutrient) bioavailability, and plant uptake. Soil was collected from the university research area, and pots were filled (8 kg each) with air-dried and sieved soil (2 mm). The treatments were: control, half dose of NPK (½ NPK; 50–30–25 mg kg − 1 ), full dose of NPK (100–60–50 mg kg − 1 ), ½ NPK + SOB, ½ NPK + SRB, and ½ NPK + SOB + SRB. A completely randomized design (CRD) was implemented with three replications. Treatments comprising bacteria and synthetic fertilizer were added to the soil before sowing of the canola crop. Synthetic fertilizer (NPK) was applied as a basal dose before sowing in the form of DAP, urea, and K 2 SO 4 in all pots. Sterilized DI water was used to dilute the bacterial inocula at the rate of 1% v/v. Ten seeds of Brassica napus per pot were sown in November 2021. After seedling establishment, plants were thinned to five plants per pot. Soil moisture (70–80%) was maintained, and weeds were manually removed wherever required. 2.7. Crop harvesting Harvesting of canola was done 145 days after sowing. Different attributes, viz., shoot length, shoot fresh and dry weight, and root fresh and dry weight, were recorded. The shoot and root dry weight of each plant was determined by separating roots from shoots, washing them with DI water, and drying them in an oven at 65 ± 3 °C for 3 days.
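Referring back to the treatment plan in section 2.6, the short sketch below converts the mg kg − 1 nutrient rates into per-pot amounts for the 8 kg pots and then into approximate weights of the fertilizer products. The elemental contents assumed for urea, DAP, and K 2 SO 4 are typical reference values, and the assumption that the doses are expressed as elemental N, P, and K is ours; the snippet is illustrative only and does not reproduce the study's actual calculations.

```python
# Illustrative per-pot fertilizer calculation for the pot experiment in section 2.6.
# Assumes the doses are elemental N, P, and K (not P2O5/K2O) and uses typical
# nutrient contents for urea, DAP, and K2SO4; these are assumptions, not values
# reported in the study.

SOIL_PER_POT_KG = 8.0
FULL_DOSE_MG_PER_KG = {"N": 100.0, "P": 60.0, "K": 50.0}

# Approximate elemental contents of the fertilizer products (fraction by weight).
UREA_N = 0.46                 # urea ~46% N
DAP_N, DAP_P = 0.18, 0.20     # DAP ~18% N, ~20% elemental P (46% P2O5)
K2SO4_K = 0.45                # K2SO4 ~45% K on a pure-compound basis

# Elemental nutrient required per pot (mg).
need_mg = {nut: rate * SOIL_PER_POT_KG for nut, rate in FULL_DOSE_MG_PER_KG.items()}

# Supply P from DAP, K from K2SO4, and top up N with urea after crediting
# the N carried by the DAP.
dap_mg = need_mg["P"] / DAP_P
k2so4_mg = need_mg["K"] / K2SO4_K
n_from_dap_mg = dap_mg * DAP_N
urea_mg = max(need_mg["N"] - n_from_dap_mg, 0.0) / UREA_N

print(f"Per-pot elemental need (mg): {need_mg}")
print(f"DAP   : {dap_mg:7.1f} mg (supplies {n_from_dap_mg:.1f} mg N)")
print(f"Urea  : {urea_mg:7.1f} mg")
print(f"K2SO4 : {k2so4_mg:7.1f} mg")
```

For the full dose (100–60–50 mg kg − 1 ), this works out to roughly 0.8 g N, 0.48 g P, and 0.4 g K of elemental nutrients per 8 kg pot before conversion to fertilizer weights.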
2.8. Analysis of soil To evaluate the impact of the bacterial inoculants and synthetic fertilizer on S and other elements, post-harvest soil was collected and analyzed for total-N , AB-DTPA available-P, and extractable-K . AB-DTPA extractable-Mn, Zn, and Fe were recorded by following . Soil texture was measured through the hydrometer method . Soil pH and EC were measured with pH and EC meters . Soil organic matter was analyzed by following . 2.9. Plant analysis Canola plant leaves were harvested at the maturity stage and dried in the oven at 65 ± 3 °C for 3 days, and dry weight was noted. Dried shoots/leaves were ground to powder, and dry ashing was done for up to 4 h at 550 °C in a muffle furnace. The digestate was used to analyze Mn, Fe, and Zn at wavelengths of 279.5 nm, 248.7 nm, and 213.7 nm, respectively, using an atomic absorption spectrophotometer. Potassium contents were analyzed through a flame photometer. Phosphorus contents were measured by following . The Kjeldahl method was used to measure total-N . 2.10. Statistical analysis A completely randomized design (CRD) was implemented with six treatments and replicated thrice. All data were analyzed using Statistix 8.1. ANOVA and multiple comparison analyses were performed using Tukey's test at P < 0.05. Means were compared using the least significant difference test (LSD 0.05 ) to assess the statistical significance of treatments. Graphs were drawn using MS Excel 2010.
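Section 2.10 specifies a one-way ANOVA for the CRD (six treatments with three replicates each) followed by Tukey's test at P < 0.05, run in Statistix 8.1. For readers who want to reproduce the same workflow in open-source software, the sketch below shows an equivalent analysis with Python and statsmodels; the response values are placeholders, not data from this study.

```python
# Sketch of the CRD analysis described in section 2.10: one-way ANOVA for six
# treatments with three replicates, followed by Tukey's HSD at alpha = 0.05.
# The shoot dry weight values below are placeholders, not data from the study.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

treatments = ["control", "half_NPK", "full_NPK",
              "half_NPK_SOB", "half_NPK_SRB", "half_NPK_SOB_SRB"]
data = pd.DataFrame({
    "treatment": [t for t in treatments for _ in range(3)],   # 6 x 3 = 18 pots
    "shoot_dw": [6.1, 5.8, 6.4, 8.0, 7.7, 8.3, 11.9, 12.4, 12.1,
                 13.2, 13.6, 12.9, 13.0, 13.5, 13.1, 15.7, 16.1, 15.9],
})

model = smf.ols("shoot_dw ~ C(treatment)", data=data).fit()
print(anova_lm(model, typ=2))          # one-way ANOVA table for the CRD

tukey = pairwise_tukeyhsd(endog=data["shoot_dw"],
                          groups=data["treatment"], alpha=0.05)
print(tukey.summary())                 # pairwise treatment comparisons
```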
3.1. SOB and SRB isolate attributes The isolated S-oxidizing bacterium was Thiobacillus thiooxidans , a gram-negative chemolithotrophic bacterium. It utilizes S 2 O 3 2 − and sulfide as energy sources to produce sulphuric acid. These are aerobic sulfur bacteria that derive energy from the oxidation of sulfide or elemental sulfur (S 0 ) to SO 4 − 2 . The isolated S-reducing bacterium was Desulfovibrio vulgaris , a Gram-negative, non-spore-forming, anaerobic, curved rod-shaped PGPR. 3.2. Growth attributes of canola Plant attributes like root, shoot, and plant biomass of canola differed significantly with bacterial inoculation in soil. Results indicated that inoculation of SOB and SRB enhanced canola biomass compared with sole NPK application. The highest shoot length of 100 cm and the highest root length of 26.8 cm were noted in T6 (½ NPK+SOB+SRB) . The highest fresh root weight (g plant − 1 ) of 2.58 was recorded in T5 (½ NPK + SRB), while the highest fresh shoot weight (g plant − 1 ) of 28.6 was noted in T6 (½ NPK + SOB + SRB). The highest dry root weight (g plant − 1 ) of 2.03 was noted in T6 (½ NPK + SOB + SRB), while the highest dry shoot weight (g plant − 1 ) of 15.9 was obtained in T6 (½ NPK + SOB + SRB) . 3.3. Nutrient contents in canola The concentration of nutrients in plant tissue at maturity showed a significant response to the applied treatments. The highest total-N (1.53%) and K (2.80%) concentrations were recorded in T6 (½ NPK+SOB+SRB), while the highest P (1.49%) concentration was recorded in T2 (full NPK), as reported in . The highest total-S (0.21%) was recorded in T6 (½ NPK+SOB+SRB), followed by treatment ½ NPK+SOB (0.18%), while the lowest total-S (0.043%) was recorded for the control treatment . A blend of ½ NPK synthetic fertilizer with SOB and SRB (T6) significantly improved Mn (0.06 g kg − 1 ), Fe (0.023 g kg − 1 ), Zn (0.045 g kg − 1 ), and Cu (0.092 g kg − 1 ) contents in plant tissue, as depicted in . Inoculation of SOB and SRB with half NPK significantly improved nutrient contents in the canola crop, which suggests a role for these microorganisms in nutrient mobilization. 3.4. Post-harvest soil pH, EC, and OM contents Bacterial amendment significantly affected the pH and OM of post-harvest soil. Soil pH showed a decreasing pattern with the applied treatments, ranging from 7.5 in the control treatment to 7.1 in T4 (½ NPK + SOB), closely followed by T6 (½ NPK+SOB+SRB; 7.2) . However, OM and EC were increased slightly by the different treatments. Soil OM content was recorded at 0.50% in the control and 0.61% in treatment T6 (½ NPK+SOB+SRB). Soil EC increased from 0.251 dS m − 1 in the T1 (control) treatment to 0.492 dS m − 1 in T6 (½ NPK+SOB+SRB) . 3.5. Nutrient contents in post-harvest soil Available-P concentration improved significantly over the control (0.057 g kg − 1 ) with the applied treatments, up to the highest rate of 0.025 g kg − 1 in T6 (½NPK+SOB+SRB). Soil extractable-K and total-N also improved. Total-N varied between 0.56% and 1.53%, showing that total-N was slightly low in the post-harvest soil. Soil extractable-K varied from 1.31% to 1.56%. Soil S ranged from 6.73% to 13.3% .
Compared to the control, significant improvements in soil Cu, Mn, Fe, and Zn were recorded in treatment T6 (½NPK+SOB+SRB), with the highest values of 0.32, 1.49, 2.54, and 1.33 mg kg − 1 , respectively .
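The study objective refers to nutrient uptake as well as tissue concentration. Uptake per plant is commonly derived by multiplying tissue concentration by dry biomass; the short example below illustrates the arithmetic using the shoot total-N concentration (1.53%) and shoot dry weight (15.9 g plant − 1 ) reported for T6. The resulting uptake figure is our back-of-the-envelope illustration, not a value reported in the study.

```python
# Back-of-the-envelope nutrient uptake:
# uptake (mg/plant) = tissue concentration (%) / 100 * shoot dry weight (g/plant) * 1000 mg/g
def uptake_mg_per_plant(conc_percent: float, dry_weight_g: float) -> float:
    return conc_percent / 100.0 * dry_weight_g * 1000.0

# Values reported for the 1/2 NPK + SOB + SRB treatment (T6).
shoot_n_percent = 1.53      # total N in shoot tissue
shoot_dw_g = 15.9           # shoot dry weight per plant

print(f"Approx. shoot N uptake: {uptake_mg_per_plant(shoot_n_percent, shoot_dw_g):.0f} mg per plant")
# -> roughly 243 mg N per plant
```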
The present study reinforced the above-stated hypothesis and revealed that the application of SOB and SRB along with chemical fertilizer strongly influenced the physicochemical attributes of soil, enhanced canola growth, and increased the bioavailability of nutrients in soil. Findings of the current study are in accordance with previously published literature , which reported that improvement in crop parameters is because of the release of bacterial metabolites and nutrient mineralization. The increase in crop shoot and root length and plant biomass was owing to the production of exopolysaccharides, siderophores, and phytohormones, and to enzyme activation by Leptothrix discophora and Bacillus polymyxa . Biofertilizer application enhances the growth of plants by improving the availability of nutrients in the rhizosphere, through the production of antibiotics, and by hindering the growth of pathogenic bacteria . Inoculation of SOB and SRB with synthetic fertilizer significantly improved the growth and yield attributes of canola compared to the control . Our findings also revealed that nutrient augmentation was much higher when the SOB and SRB inoculants were applied with the half recommended dose of NPK; however, other studies found limited results with similar microorganisms in acidic soils, and the difference in results may be attributed to differences in soil characteristics and environmental conditions. Application of Bacillus spp. improved micronutrients, viz., Mn, Zn, and Fe, in plants . Soil microbes contribute to solubility and hence improve soil micronutrients . The impact of PGPR on plant production is well documented and has been attributed to the synthesis of phytohormones and a greater supply of nutrients . Microbes use several methods to enhance the solubility of nutrients in soil, such as altering plant metabolism and changing root exudates . Microbial inoculation of Bacillus mucilaginosus and Bacillus megaterium improved plant growth . Oxidation and reduction of S by microbes, carried out by SOB and SRB, are the most active processes in the S-cycle and are considered vital phenomena in S biogeochemical cycling. Generally, on a nutritional basis, SRB and SOB are characterized as litho-autotrophs. Reduced S compounds such as H 2 S, S 0 , sulfite (SO 3 2− ), S 2 O 3 2− , and polythionates (S n O 6 2− ) are oxidized by SOB into SO 4 2− . Under anaerobic conditions, however, SO 4 2− serves as an electron acceptor for SRB, which reduce SO 4 2− and other S compounds (S 2 O 3 2− , SO 3 2− , S 0 ) into H 2 S. Moreover, in natural ecosystems, SRB reduce SO 4 2− through assimilatory and dissimilatory reactions. SRB utilize various types of enzymes in dissimilatory reactions to reduce the S substrate, while in the assimilatory process SO 4 2− is incorporated into organic compounds after reduction of the S substrate. The soil used in the present study was slightly calcareous; inoculation with both SOB ( Thiobacillus thiooxidans ) and SRB ( Desulfovibrio vulgaris ) significantly reduced soil pH. This might be because of organic acid production in the rhizosphere by soil microorganisms. Bacteria produce organic acids in soil, such as carboxylic acids , which lower soil pH in the rhizosphere and dissociate calcium phosphate bonds in calcareous soils. Furthermore, microbes modify the redox potential and pH of the surrounding medium . Our results are also in line with , who stated that the presence of living organisms in soil produces pH variation and marked changes in redox potential within the soil.
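The oxidation and reduction steps described above can be summarized with representative textbook stoichiometries; these are generic equations given for orientation, not reactions measured in this study.

```latex
% Representative sulfur redox reactions: oxidation by SOB (e.g., Thiobacillus
% thiooxidans) and dissimilatory sulfate reduction by SRB (e.g., Desulfovibrio),
% with CH2O standing in for generic organic matter.
\begin{align*}
\text{SOB:}\quad & \mathrm{S_2O_3^{2-} + 2\,O_2 + H_2O \;\longrightarrow\; 2\,SO_4^{2-} + 2\,H^+} \\
\text{SOB:}\quad & \mathrm{2\,S^0 + 3\,O_2 + 2\,H_2O \;\longrightarrow\; 2\,SO_4^{2-} + 4\,H^+} \\
\text{SRB:}\quad & \mathrm{SO_4^{2-} + 2\,CH_2O \;\longrightarrow\; H_2S + 2\,HCO_3^-}
\end{align*}
```

The protons released during oxidation are consistent with the pH decrease observed after SOB inoculation, while sulfate reduction regenerates reduced S species that can be re-oxidized, closing the cycle.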
SOB and SRB have an important role, as they significantly affect pH and redox potential . Exudation of protons (H + ), carboxylates, and enzymes by plant roots also affects soil pH . SOB and SRB improve soil fertility by regulating pH and EC. During this study, application of the microbial inoculants slightly increased the OM content of the soil. Bacillus polymyxa and Thiobacillus thiooxidans increase OM in soil via the release of numerous exopolysaccharides that break down large polymeric substances, which in turn offer nutrients to plants . Nutrient availability is controlled by pH and redox conditions . Additionally, soil microfauna is also a key factor in nutrient dynamics in the soil . We noted that post-harvest soil nutrients (macro- and micronutrients) responded greatly to SOB and SRB inoculation in combination with ½ NPK fertilizer. The highest uptake of nutrients by plants was recorded at pH 7.0, even with the lowest chemical fertilizer dose. Similar results have been reported by other researchers. According to , rhizosphere acidification and salinization are the most significant factors affecting the availability of nutrients in soil. Inoculation of seeds with Leptothrix discophora and Bacillus polymyxa bacteria enhanced solubilization as well as mineralization of macronutrients . Several bacterial species play a vital role in increasing soil fertility by increasing OM, which enhances the availability of macronutrients in soil . Moreover, by producing organic acids they contribute to nutrient mobilization and uptake by plants. The population of SOB and SRB is a key factor in the availability of soil S for plants . Microorganisms mobilize S in aerated soils via S oxidation, which promotes H + excretion by roots, decreases the pH of saline soils, and improves the availability of micronutrients . The availability of soil S is affected by soil pH and improves greatly as pH decreases. This could be because S oxides become stronger oxidants when soil pH decreases, resulting in more easily reduced S ions . However, excess application of S may acidify the soil, decreasing the availability of P by promoting its fixation and enhancing competition between phosphate and SO 4 2− ions. Further, it enhances K leaching, particularly in soils with lower CEC, and disrupts the nutrient uptake balance by favouring SO 4 2− absorption. Eventually, degraded soil health reduces the efficiency of P and K, necessitating supplementary fertilizer to correct this imbalance. Farmers can take advantage of the current study by exploiting the microbial redox cycling of S to improve the bioavailability of nutrients, increasing Brassica napus growth and yield while potentially decreasing fertilizer costs. Findings of the current study revealed that the application of bacteria (S-oxidizing and S-reducing) with ½ NPK fertilizer provided higher availability of S, as well as of other nutrients, to canola by decreasing S immobilization in soil. Treatment ½ NPK+SOB+SRB improved soil N, P, K, and SO₄ by 15.9%, 38%, 2.0%, and 72%, respectively, and enhanced plant N, K, and SO₄ by 7.7%, 31%, and 239%, compared to full NPK. Additionally, ½ NPK+SOB showed the highest pH reduction (4%). Furthermore, significant improvement in the fertility status of the soil was noted. In conclusion, these results suggest that combined application of synthetic fertilizers along with SOB and SRB inoculation as a soil amendment improves plant growth attributes and nutrient status in plants and soil.
This approach is easily adoptable by farmers and offers an eco-friendly way to reduce crop nutrient (fertilizer) inputs, achieve high yields, and sustain a satisfactory profit. The study highlighted the benefits of SOB and SRB inoculation, suggesting future research on their effectiveness across soil types, climatic conditions, and interactions with fertilizers to optimize resource use in canola cultivation. |
Molecular features of the serological IgG repertoire elicited by
egg-based, cell-based, or recombinant haemagglutinin-based seasonal influenza
vaccines: a comparative, prospective, observational cohort study | a906060b-f0dd-46d5-9267-016015b5ba38 | 11807745 | Biochemistry[mh] | Seasonal influenza vaccination has been shown to attenuate the severity of symptomatic illness; however, the overall vaccine effectiveness of licensed influenza vaccines remains suboptimal, with only 35·7% average vaccine effectiveness in the past 10 years in the USA, according to the US Centers for Disease Control and Prevention. Among the influenza A viruses, the A/H3N2 strain is of particular importance due to its higher rate of antigenic drift than the A/H1N1 strain and associated lower vaccine effectiveness. Since the first report on growing influenza virus in embryonated chicken eggs in the 1930s, egg-produced viruses have been used to produce influenza vaccines. Drawbacks of egg-based inactivated quadrivalent seasonal influenza vaccine (eIIV4) include allergic responses to egg products and, separately, the need to introduce mutations around the haemagglutinin (HA) receptor binding site needed for propagation in chicken eggs and binding to avian receptors, namely α-2,3 linked sialic acids, as opposed to α-2,6 linked sialic acids in humans. Egg adaptation mutations in the eIIV4 vaccine can elicit antibodies that are unable to bind human influenza strains. , More recently, we and others have shown that eIIV4 immunisation directs part of the antibody response towards avian antigens, such as sulfated type Galβ1–4GalNAcβ avian glycans, which are prevalent in egg allantoic fluid. , To overcome these limitations, two alternate vaccine production platforms have been developed: inactivated subunit vaccines from virus grown in Madin–Darby Canine Kidney cells (cell culture-based inactivated quadrivalent seasonal influenza vaccine [ccIIV4]) and recombinant HA vaccines produced in insect cells (recombinant HA-based quadrivalent seasonal influenza vaccine [RIV4]), with ccIIV4 approved by the US Food and Drug Administration in November, 2012, and RIV4 in January, 2013. Some advantages of mammalian and insect cell tissue culture systems are their shorter vaccine production timelines and the lack of egg-adapted mutations and egg antigens. Similar to eIIV4, ccIIV4 vaccines use a standard dose of 15 μg HA per strain, whereas RIV4 vaccines are exclusively formulated with three times higher dose of HA (45 μg) per strain. Although eIIV4, ccIIV4, and RIV4 have been in use for over 10 years, detailed comparative studies of vaccine effectiveness and serological responses are scarce. Izurieta and colleagues , reported that ccIIV4 showed only minor improvement in relative vaccine effectiveness compared with eIIV4, and other studies found a similar titre of neutralising serum antibodies by ccIIV4 and eIIV4 immunisation against all four vaccine strains. , Recently, Dawood and colleagues reported that RIV4 elicited higher neutralising antibody titres to A/H1N1, A/H3N2, and B/Yamagata strains than did standard-dose eIIV4 and, similarly, Wang and colleagues and Gouma and colleagues reported that RIV4 elicits broader H3N2 neutralisation breadth than either eIIV4 or ccIIV4. Importantly, a large clinical study of vaccine efficacy of more than 8000 adults aged 50 years or older (which provided key evidence used for licensure) revealed 30% higher protection against RT-PCR-confirmed influenza infections for RIV4 than for standard-dose eIIV4 during the H3N2-predominant 2014–15 season. 
Although it could be argued that the higher vaccine efficacy reported with RIV4 might be a consequence of the three times higher concentration of HA in this vaccine formulation (45 μg per strain) than in the standard-dose eIIV4 and ccIIV4 (15 μg per strain), this is not likely to be the case in light of the fact that with eIIV4, the Fluzone High-Dose Quadrivalent vaccine approved for the elderly (aged >65 years) has a four times higher HA dose (60 μg per strain), yet it does not result in higher neutralising antibody titres. To better understand how different vaccine platforms affect antibody clonal compositions and their respective quantities and qualities in the serum response, in this study we aimed to comprehensively profile the sequence identity, abundance, and binding affinity of H3/HA-specific circulating antibodies that comprise the polyclonal IgG serological repertoire in three vaccine cohorts who received either RIV4, eIIV4, or ccIIV4 during the 2018–19 influenza season. Study design and participants This comparative, prospective, observational cohort study is a preplanned exploratory analysis of the original randomised, open-label trial ( NCT03722589 ) involving 727 US health-care workers. We selected 15 female (mean age 47·6 years [SD 8]) trial participants who received either RIV4 (Flublok Quadrivalent by Sanofi Pasteur, Swiftwater, PA, USA; 45 μg of HA per strain), eIIV4 (Fluzone Quadrivalent by Sanofi Pasteur; 15 μg of HA per strain), or ccIIV4 (Flucelvax Quadrivalent by Seqirus, Holly Springs, NC, USA; 15 μg of HA per strain; n=5 per cohort) during the 2018–19 influenza season at Baylor Scott & White Health, Temple, TX, USA. Participants were excluded if they had experienced any previous hypersensitivity to influenza vaccines or received any vaccination within 4 weeks before and after the initial visit. Eligible individuals were selected based on comparable day 28 serum microneutralisation titres and similar vaccination history. This strategy for selecting individuals allows for a direct comparison of molecular features in anti-H3/HA serum repertoires induced by different vaccine platforms while mitigating confounding effects that could arise from comparing individuals with large variations in post-vaccination titres. Another consideration in the inclusion of individuals was the availability of sufficient amounts of sera and peripheral blood mononuclear cells, as required for B-cell receptor sequencing (BCR-Seq) and immunoglobulin sequencing (Ig-Seq). 15 was the maximum number of individuals we could analyse, given sample availability and the cost of immunoglobulin sequencing experiments. Participants provided written informed consent before enrolment and trial participation. Baseline characteristics for all individuals were collected from electronic medical records. Investigators were blinded to vaccine groups until the completion of the study. Participants had serum (day 0 and day 28) and peripheral blood mononuclear cells (day 0 and day 7) collected before and after vaccination ( p 25). Procedures Serum microneutralisation assays were performed using cell-grown A/Singapore/INFIMH-16–0019/2016 viruses propagated in MDCK-SIAT1 cells (MilliporeSigma, Burlington, MA, USA; p 3). The microneutralisation titres were measured and reported by Dawood and colleagues as primary endpoints, which in turn guided donor selection in our preplanned exploratory analysis. Serum IgG binding titres were determined using ELISA to recombinant A/Singapore/INFIMH-16–0019/2016 HA ( p 3). 
Circulating T-follicular helper cells (CD4 + CXCR5 + PD1 + CD25 − ) were identified by multiparametric fluorescence-activated cell sorting using fluorescent-labelled antibodies ( p 3). We used the serum proteomics workflow, Ig-Seq, which capitalises on liquid chromatography–tandem mass spectrometry (LC-MS/MS)-based serum proteomics combined with subject-specific, natively paired sequencing of variable heavy chains (VH)–variable light chains (VL) in peripheral B cells that provides a database for mass spectra interpretation and full-length antibody sequences, which can in turn be recombinantly produced for biochemical and functional characterisation . The VH-only or VH–VL paired high-throughput BCR-Seq was performed using bulk and single-cell day 7 circulating B-cell sequencing, respectively, as previously described ( pp 4–6). HA-binding antibodies were isolated from IgG plasma by affinity chromatography with immobilised A/Singapore/INFIMH-16–0019/2016 HA and analysed by LC-MS/MS, as described previously ( pp 6–7). , , The mass spectrometry search identified peptide spectra matches originating from heavy-chain complementarity determining region 3 (CDRH3) sequences, and the abundance of each clonotype was calculated by summing the extracted ion chromatogram (XIC) peak area of CDRH3 peptides mapped to a given clonotype ( pp 7–9). We selected monoclonal antibodies for which high-confidence CDRH3 peptides in serum were identified by LC-MS/MS at high abundance (as determined by XIC area), along with high peptide coverage of the VH, especially for the complementarity determining regions ( pp 26–27). The binding affinity of recombinant monoclonal antibodies was determined by ELISA against A/Texas/50/2012 and A/Singapore/INFIMH-16–0019/2016 HA ( pp 9–10). The high-throughput multiplex influenza antibody detection assay was conducted using multiplexed microsphere beads containing a broad panel of H3/HAs and nucleoprotein ( pp 4, 28). The binding kinetics of UT14 and its competition with known monoclonal antibodies were determined using biolayer interferometry ( pp 10–11). Cryo-electron microscopy structure of UT14 Fab in complex with A/Singapore/INFIMH-16–0019/2016 HA was determined using FEI Titan Krios G3 300kV cryo-EM (Thermo Fisher Scientific, Waltham, MA, USA) with a K3 direct detection camera ( pp 11, 31). Full details on sources and identifiers of reagents used in this study are in the (pp 3–11). Outcomes The primary exploratory outcome of this study was to compare the molecular composition of the HA-specific IgG antibody repertoire after vaccination by RIV4, eIIV4, or ccIIV4. As key secondary outcomes, the level of back-boosting, molecular features of serum clonotypes, binding affinity of representative monoclonal antibodies, HA serum-binding landscape against time-ordered H3 HA variants, correlation of antibody repertoire features with circulating T-follicular helper cell frequencies, and stereotypical B-cell receptor responses were evaluated. An additional exploratory outcome involved analysing the biochemical and structural features of an unusual near-stereotypical monoclonal antibody, which was detected at a high abundance in serum. Statistical analysis For multiple comparisons, ordinary one-way ANOVA tests followed by Tukey’s post-hoc tests, Welch’s ANOVA tests followed by Dunnett’s T3 post-hoc tests, or Kruskal–Wallis tests followed by Dunn’s post-hoc tests were used based on the assessment of normality and homogeneity of variance assumptions ( p 11). 
Unpaired or paired comparisons between two groups were conducted using the two-sided Mann–Whitney U or Wilcoxon matched-pairs signed rank tests, respectively. The Pearson correlation tests were conducted using Scipy python package version 1.9.1. The Tukey-style box-and-whisker plot was drawn using default geom_boxplot function by ggplot2 version 3.4.2. All raw data points are shown in the box-and-whisker or violin plot. Data are presented as median with 95% CI estimates or mean (SD). Statistical analyses were conducted using GraphPad Prism version 10.2.1 using a threshold for significance of p<0·05. Role of the funding source The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. 
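To make the statistical comparisons described above concrete, the following minimal Python sketch (not the authors' analysis code) shows how the two-sided Mann–Whitney U test, the Wilcoxon matched-pairs signed rank test, and the Pearson correlation can be run with scipy.stats; the variable names and titre values are illustrative assumptions only.

# Illustrative sketch (assumed data, not the study's code): two-group and paired
# comparisons plus a correlation, mirroring the tests named in the statistical analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
riv4 = rng.lognormal(mean=6.0, sigma=0.5, size=5)    # hypothetical day-28 titres, RIV4 cohort
cciiv4 = rng.lognormal(mean=5.5, sigma=0.5, size=5)  # hypothetical day-28 titres, ccIIV4 cohort
day0 = rng.lognormal(mean=5.0, sigma=0.5, size=5)    # hypothetical paired day-0 titres
day28 = day0 * rng.lognormal(mean=0.3, sigma=0.2, size=5)  # hypothetical paired day-28 titres

u_stat, p_unpaired = stats.mannwhitneyu(riv4, cciiv4, alternative="two-sided")  # unpaired cohorts
w_stat, p_paired = stats.wilcoxon(day0, day28)                                  # paired timepoints
r, p_corr = stats.pearsonr(np.log(day0), np.log(day28))                         # correlation of log titres
print(p_unpaired, p_paired, r, p_corr)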
We selected 15 female health-care personnel (n=5 per vaccine cohort) who were enrolled in a large clinical trial for cohorts RIV4 (participants A1–5; mean age 47·8 years [SD 4·7]), eIIV4 (participants B1–5; 46·8 years [9·5]), and ccIIV4 (participants C1–5; 48·2 years [11·3]). Participants received a single dose of RIV4, standard-dose eIIV4, or ccIIV4 during the 2018 September-to-October period of the 2018–19 influenza season. We chose our study cohorts to have statistically similar serum microneutralisation and ELISA binding titres to A/Singapore/INFIMH-16–0019/2016 (H3N2) virus on day 28 after vaccination. Serum IgG ELISA binding titres significantly correlated with the serum microneutralisation titres to the vaccine H3 strain (p=0·0001, Pearson r=0·64 [95% CI 0·21–0·86]). Ig-Seq serum proteomics analysis of A/Singapore/INFIMH-16–0019/2016 H3/HA affinity-purified IgGs showed that all three vaccines elicited a highly polarised serological repertoire, dominated by back-boosted antibodies that were also detectable at day 0 (median percentage pre-existing: 98% [95% CI 23–100] for RIV4, 98% [89–99] for eIIV4, 92% [23–100] for ccIIV4; p=1·0 for all multiple comparisons; pp 12–13). The serological repertoires comprised a few highly abundant clonotypes, with the top three most abundant clonotypes accounting for a median 58% (95% CI 46–69) of the post-vaccination repertoire by abundance. The back-boosted (pre-existing) antibodies constituted a median 98% (95% CI 88–99) of the anti-H3/HA serum response, with no significant differences observed among the three vaccine cohorts. Interestingly, two individuals (participant identifiers A5 in the RIV4 cohort and C4 in the ccIIV4 cohort) had an unexpectedly 3·8-times lower fraction of pre-existing antibodies at day 28 (28% and 23% by abundance, respectively) compared with the other 13 individuals, who had a median 98% (95% CI 89–99) of back-boosted antibodies in their sera. We noticed that these two individuals, compared with the rest of the cohort, had a significantly higher increase in serum nucleoprotein (NP) titre, by a median 3·2 times (95% CI 3·0–3·4), on day 28 compared with day 0, whereas the other individuals did not have appreciable changes in NP titre between the two timepoints (differences in NP titre change, p=0·02; p 14). This finding suggests that the two donors with atypical fractions of pre-existing antibodies and NP titres might have had subclinical influenza infection around the time of vaccination, given that they received recombinant HA-based and inactivated subunit vaccines. We analysed the molecular features of the serological IgG repertoire associated with different vaccine platforms, specifically among the 13 donors who showed no sign of infection. For these individuals, the clonal composition and the extent of repertoire polarisation on day 28 versus day 0 were not influenced by the type of vaccination received, as measured by the D80 diversity index (p>0·05). 
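As a sketch of how such polarisation summaries can be derived from clonotype-level data, the short Python example below (an illustration under stated assumptions, not the study's pipeline) computes the abundance share of the top clonotypes and a D80-style index; here D80 is assumed to be the minimum number of clonotypes accounting for 80% of total abundance, which may differ from the authors' exact definition.

# Illustrative sketch: repertoire polarisation summaries from clonotype abundances.
# "abundances" stands for summed XIC peak areas per clonotype for one donor (assumed input).
import numpy as np

def top_k_share(abundances, k=3):
    # Fraction of total abundance contributed by the k most abundant clonotypes.
    a = np.sort(np.asarray(abundances, dtype=float))[::-1]
    return a[:k].sum() / a.sum()

def d80(abundances):
    # Assumed D80 definition: smallest number of clonotypes reaching 80% of total abundance.
    a = np.sort(np.asarray(abundances, dtype=float))[::-1]
    cumulative = np.cumsum(a) / a.sum()
    return int(np.searchsorted(cumulative, 0.80) + 1)

example = [42.0, 30.5, 11.2, 5.1, 3.3, 2.0, 1.4, 0.9]  # hypothetical clonotype abundances
print(top_k_share(example), d80(example))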
Furthermore, the IgG antibodies comprising the anti-H3/HA serum response had the same molecular features in terms of VH somatic hypermutation, CDRH3 hydrophobicity, and the CDRH3 amino acid length across all three vaccine cohorts (p>0·05; – ). In the case of the typical donors, pre-existing antibodies had a higher level of VH somatic hypermutation compared with newly elicited antibodies (p=0·030; p 15) for all vaccine cohorts. Given the indistinguishable features of the anti-H3/HA serum repertoires, we examined whether molecular characteristics of the day 28 repertoire correlate with immunological parameters regardless of the vaccine received. Across all 15 individuals, serum antibodies encoded by IGHV4–59, IGHV3–30, IGHV1–69, IGHV4–31, and IGHV4–39 were used with higher frequency than other VH gene families . IGHV1–2, IGHV5–51, and IGHV1–18 showed higher serum abundance when calculated by LC-MS/MS XIC peak area, albeit less frequently ( p 16). Additionally, we found that a reduction in the repertoire diversity (in other words, an increase in polarisation) correlated with an increase in circulating follicular helper T-cell frequency at day 7 versus day 0, although this observation did not reach statistical significance (p=0·068; p 17). To compare the quality of monoclonal antibodies identified in the serum repertoire among different vaccine cohorts, we recombinantly expressed antibodies representative of dominant serum clonotypes ( p 18). Although monoclonal antibodies from all three vaccine cohorts had similar levels of VH somatic hypermutation (p>0·05; p 18), we found that the monoclonal antibodies induced by RIV4 had a substantially higher affinity to the current vaccine A/Singapore/INFIMH-16–0019/2016 HA and A/Texas/50/2012 HA used in the preceding 2014–15 season than did those induced by the other two vaccines ( – , p 18). For the RIV4 cohort, the median half-maximal effective concentration (EC50) of monoclonal antibodies was 0·037 μg/mL (95% CI 0·012–0·12) and 0·037 μg/mL (0·017–0·32) for A/Singapore/INFIMH-16–0019/2016 and A/Texas/50/2012 HAs, respectively, which is approximately two orders of magnitude (30 to 500 times) lower than the median EC50 of monoclonal antibodies induced by either eIIV4 or ccIIV4 (H3 Singapore, 4·43 μg/mL [95% CI 0·030–100] for eIIV4, 18·50 μg/mL [0·99–100] for ccIIV4; H3 Texas, 1·10 μg/mL [0·045–100] for eIIV4, and 12·63 μg/mL [1·83–100] for ccIIV4; – ). Notably, we found that higher affinity monoclonal antibodies boosted by RIV4 contributed a significantly larger fraction of the serum response than those elicited by eIIV4 or ccIIV4 . There was no significant difference in the quality of monoclonal antibodies constituting the serum response in the eIIV4 and ccIIV4 cohort . We analysed the binding landscapes of bulk serum and also of top-dominant monoclonal antibodies (ie, detected at high concentrations in the serum) against a time-ordered panel of H3/HAs via multiplexed Luminex assay. In two individuals (A4 and B5) for whom we detected dominant clonotypes that accounted for more than 50% of the anti-H3/HA serum response, the binding landscape for these two dominant clonotypes, M81 and M91, closely mirrored the binding landscape observed with whole serum. The concordance in the binding pattern of sera and the dominant antibodies identified by Ig-Seq suggest that a single antibody lineage can largely dictate the functional properties of the polyclonal serum response . 
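Because EC50 values such as those reported above are typically derived from dose–response fits, the following generic Python sketch (not the authors' procedure) estimates an EC50 by fitting a four-parameter logistic curve to hypothetical ELISA readings with scipy.optimize.curve_fit; the concentrations, absorbance values, and starting parameters are illustrative assumptions.

# Generic four-parameter logistic (4PL) fit to hypothetical ELISA data; EC50 is the
# antibody concentration giving a half-maximal signal. All numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([100, 10, 1, 0.1, 0.01, 0.001])   # µg/mL, serial dilution (hypothetical)
od = np.array([2.1, 2.0, 1.6, 0.8, 0.25, 0.08])   # ELISA absorbance (hypothetical)

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.2, 0.5, 1.0], maxfev=10000)
bottom, top, ec50, hill = params
print(f"estimated EC50 ≈ {ec50:.3f} µg/mL")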
Deep scanning saturation mutagenesis could, in principle, further assist in delineating the role of non-dominant serum antibodies in shaping binding breadth and possibly viral escape. Additionally, consistent with the larger fraction of high affinity monoclonal antibodies boosted by RIV4, complete sera from RIV4 vaccine recipients had a substantially higher and broader increase in H3/HA binding response to contemporary strains that have been circulating after the year 2000 (pp 19, 29), compared with eIIV4 and ccIIV4 recipients. Given that we generated a very large set (>10⁶) of antibody VH–VL paired sequences from day 7 total B cells, a population in which antigen-specific plasmablasts are highly enriched after influenza vaccination, we examined B-cell receptor clonotypes with known stereotypical HA binding sequence features within this dataset (pp 20, 30). Although no significant differences were detected in the stereotypical B-cell responses targeting the central stalk, trimer interface, and group 1 and group 2 broadly neutralising stem epitopes, eIIV4 elicited a significantly higher frequency of canonical egg-glycan binding antibodies than did ccIIV4 (median 0·196% [95% CI 0·067–0·372] for eIIV4, 0·035% [0·000–0·062] for ccIIV4, p=0·0071; p 20). Furthermore, stereotyped B-cell receptors associated with binding to the membrane-proximal anchor epitope were three times less frequent in the RIV4 cohort than in the ccIIV4 cohort, although the difference was not statistically significant (median 0·062% [95% CI 0·000–0·084] for RIV4, 0·181% [0·016–0·195] for ccIIV4, p=0·064; p 20). Interestingly, even though we detected multiple antibodies with stereotypical HA binding features in day 7 peripheral B cells, only two near stereotypical monoclonal antibodies, UT14 and M47 (a stereotypical trimer interface monoclonal antibody), were detected in the serum of two of the 15 individuals analysed. UT14 is a heterosubtypic monoclonal antibody that was found to be abundant in the serum of a ccIIV4 recipient and that possessed conserved anchor binding sequence features, such as the use of VH3–30, IGKV3–11, and IGKJ5 genes, along with an Asn-Trp-Pro amino acid motif in the CDR3 of the kappa light chain (CDRK3). However, UT14 has a nine amino acid-long CDRK3, as opposed to the ten amino acid-long CDRK3 seen in other antibodies of this class (p 21). In addition to detecting the UT14 CDRH3 peptide that defines the lineage in the serum, we also detected by LC-MS/MS unique tryptic peptides that contain the Asn-Trp-Pro region from the CDRK3 region, further confirming its serological relevance. Biolayer interferometry competition assays revealed that UT14 does not compete with anchor nor stem monoclonal antibodies; however, it competes with known trimer interface monoclonal antibodies (p 21). The higher binding affinity of UT14 Fab towards monomeric HA than trimeric HA indicates that the UT14 epitope could be less readily accessible in trimeric HA. We further obtained a 3·8 Å resolution cryo-electron microscopy structure of UT14 with HA, revealing that this antibody buries 808 Å² of lateral surface on the H3 head at the interface between two H3 protomers of the trimer (pp 22–24). 
Similar to the human monoclonal antibodies, FluA-20, H2214, and S5V2–29, and the murine monoclonal antibody, FL-1066, the UT14 Fab interacts with the HA 220-loop and 90-loop via both its heavy (220 loop) and light chains (220 and 90 loop) while demonstrating a distinct angle of approach relative to the aforementioned trimer interface antibodies . UT14 utilises its light chain extensively for HA recognition, a feature that distinguishes UT14 and the murine FL-1066 from the human FluA-20, H2214, and S5V2–29-monoclonal antibodies, and faces the HA in a similar orientation with FL-1066. Unlike the Asn-Trp-Pro motif critical for binding to the anchor epitope, P95 in the Asn-Trp-Pro residues of UT14 CDRK3 does not interact with the HA. Instead, CDRK3 residues, including R91 and Y92, along with the N93 and W94, engage in epitope contact ( p 24). Our findings suggest that for all three influenza A vaccines—RIV4, eIIV4, and ccIIV4—the serological repertoire was heavily shaped by back-boosting, with more than 80% of the antigen-specific clonal lineages found in serum having been elicited by previous exposures and thus detected at day 0. Due to this high degree of serological imprinting, vaccination with RIV4, eIIV4, or ccIIV4 results in serological anti-H3/HA clonotypes having similar repertoire diversity, VH somatic hypermutation, and CDRH3 features. These results are in line with the fact that A/Singapore/INFIMH-16–0019/2016 is antigenically similar to A/Hong Kong/4801/2014, which had been used as H3 vaccine strains in the two preceding 2016–17 and 2017–18 seasons, and that all individuals had extensive influenza vaccination records in the past 5 consecutive years. Furthermore, irrespective of the vaccine received, our findings support the notion that the serological response to H3/HA is driven by antibodies derived from a small set of select VH gene families. , The trend in correlation between circulating T-follicular helper cell responses and the degree of polarisation suggests that highly polarised serum response, likely to be derived from a few dominant expanded B cells, might require more robust help from circulating T-follicular helper cells. Importantly, from a clinical standpoint, we present data showing that the RIV4 vaccine preferentially boosts H3/HA-specific clonotypes that have much higher affinity for the vaccine HA as well as greater binding breadth to contemporary H3N2 strains than do eIIV4 and ccIIV4. Since we saw no statistical difference in the amount of HA-specific IgG that could be isolated from sera by affinity purification with immobilised H3/HA, we conclude that it is the preferential back-boosting of high-affinity monoclonal antibodies by RIV4 that is probably responsible for the greater increase in H3 serum landscape observed with RIV4 vaccination. Clinical serological testing for eIIV4, ccIIV4, and RIV4 , – cannot address the question of how antigenic and structural differences among the three licensed vaccines affect the composition and functional features of the serological repertoire at the molecular level. One recent study analysing plasmablast-encoded monoclonal antibodies reported that RIV4 elicited a broader homo-subtypic breadth relative to ccIIV4. 
However, the single-cell cloning from peripheral B cells, although immensely valuable, does not provide information about the antigen-specific serological repertoire that constitutes the polyclonal serum response for multiple reasons, including the fact that less than 5% of plasmablast-encoded antibodies are detectable in circulation and thus have a role in protection against viral infection. Extensive earlier studies and more recent mathematical modelling of the germinal centre reaction indicate a non-linear correlation between antigen dosage and antibody affinity, in which an optimally moderate dose, one that is not too high or too low, can lead to the production of high-affinity antibodies. A recent structural study revealed that the RIV4 vaccine exclusively contains starfish-like HA multivalent structures consisting of five to 12 copies of HA trimers clustered together to form a transmembrane core. By contrast, the ccIIV4 and eIIV4 vaccines contain comparable or higher fractions of individually isolated HA trimers than HA multimers. Given that the multivalent presentation of immunogens has been shown to enhance antibody responses and increase the affinity of bound monoclonal antibodies, we speculate that the RIV4 immunogen structure probably affects the binding affinity of RIV4-boosted monoclonal antibodies observed in our study. Furthermore, the crowding of HA stem domains within the starfish-like HA structures of RIV4 was estimated to occlude about 28% of stem epitopes due to steric clashes. This structural feature of the RIV4 immunogen might affect the ability to activate B cells binding to less accessible epitopes in the HA stem region, which is consistent with our finding of a decrease in the frequency of day 7 B-cell clonotypes with membrane-proximal anchor-selective stereotypical features. By analogy, stabilised HIV-1 envelope antigens displayed on nanoparticles reduced the accessibility of epitopes proximal to the base of the antigen due to steric crowding with neighbouring antigens on the nanoparticle surface. Multivalent display of antigens has been further shown to enhance trafficking to follicular dendritic cells and accumulation in germinal centres. Lastly, we report on UT14, a highly abundant monoclonal antibody, that has nearly all stereotypical features of membrane-proximal anchor antibodies reported by Guthmiller and colleagues, although it differs by having a nine amino acid-long CDRK3 instead of a canonical ten amino acid-long CDRK3. A closer inspection revealed that one more conserved proline adjacent to Asn-Trp-Pro (ie, Asn-Trp-Pro-Pro) is likely an essential feature of the anchor stereotype (Guthmiller JJ, personal communication). This additional Pro amino acid is 100% conserved in all anchor-binding antibodies and is likely critical for stabilising the Asn-Trp-Pro loop for epitope binding. Our findings thus highlight how antibodies with highly similar sequence features might have evolved to recognise distinct epitopes on the same HA antigen. The discovery of diverse heavy and light chain gene rearrangements in antibodies targeting the recurrent HA epitopes will continue refining our understanding of canonical and non-canonical antibody responses. 
There are several limitations in our study, including the small sample size (n=5 individuals per cohort) and the fact that the cohorts comprise female health-care personnel (mean age 47·6 years [SD 8]) and thus are not representative of diverse populations with different baseline characteristics, such as age, sex, ethnicity, or health status. Additionally, the anti-H1/HA serological repertoire could not be analysed due to limitations in the amount of serum and peripheral blood mononuclear cells that had been obtained under the institutional review board protocol. Further studies are needed to determine how the vaccine platform-specific repertoires are shaped in a diverse cohort of individuals and for different HA subtypes. Nevertheless, our finding of back-boosted antibodies dominant in all three vaccine cohorts, along with the prevalence of high-affinity monoclonal antibodies boosted by RIV4, points to strategies for designing more efficacious vaccines. |
FERTILITY CARE IN LOW AND MIDDLE INCOME COUNTRIES: Embryologists’ practices of care in IVF-clinics in sub-Saharan Africa | 51e84f3a-f42e-4e0e-a795-f41406e15aa7 | 11792113 | Anatomy[mh] | Embryologists are vital to in vitro fertilization (IVF) success, yet there is relatively little literature on the nature of their work. The authors of one editorial suggest that ‘the embryologist has always been considered a highly skilled “artisan of life”, extensively trained to master sensitive microscale procedures where the margin for error is close to zero’, and note their various roles other than technical . A summary of the ‘modern’ embryologists’ work suggests that they undertake a multiplicity of tasks – not only as technical experts but also as managers, researchers, collaborators, scholars, communicators and mentors . The importance of one or another role may depend on their specific position and experience, but ‘embryologists’ efficacy behind the scenes reflects positively on the success of the fertility clinic’ . Hence, although invisible and ‘behind the scenes’ , the work of embryologists is intense, collaborative and stressful. One study of embryologists found that 59% of UK and 62% of US embryologists reported high ‘burnout’, stress and occupational challenges . Little is known about the roles and experiences of embryologists in IVF clinics in the global south. Some anthropologists have reported on embryologists’ transnational mobility, as embryology was – and still is – a scarce expertise in many places in the global south, while the demand for IVF is constantly growing . showed how, in Uganda and Ghana, these ‘transnational arrangements affect the local appropriation of laboratory procedures, protocols, and practices in various way’ (p. 69), and delved into the relationship between clinicians and embryologists, where the latter – referred to as ‘biologists’ in the Mexican context – felt not ‘as much taken into account as much as they should’ (p. 39). Efforts to define the role, status and training needs of embryologists are ongoing . In looking at the role of embryologists in ensuring quality care, Kathryn Go describes them as ‘the most valuable and critical asset of an assisted reproduction technique laboratory’ and notes that ‘through their hands, safe conduct of patients’ gametes and embryos is achieved’ . She highlights the combination of technical skills – ‘the craft’ of embryology – with various administrative or regulatory compliance activities and lists a long range of responsibilities within clinics. These include the preparation and quality testing of materials and labware; preparation of gametes for transfers; cryopreservation and thawing procedures; embryo transfers; sperm manipulation, preparation and storage; training of new embryologists; biopsies; and retrospective data analysis. In addition, they are responsible for the maintenance of the laboratory, including instruments, equipment, supplies and temperature; record-keeping of all treatment cycles; education of clinical staff and patients about procedures; compliance work with accreditation authorities; and reporting of clinical data . Above all, embryologists carry a unique responsibility for the ‘moral objects’ of human embryos demanding meticulous attention and risk avoidance in their work. At any point, they can succeed or fail through technical mishap, neglect or carelessness. 
The care-work and emotional labour undertaken by embryologists – to care for embryos, oocytes and sperm, and patients – is highlighted in a study in New Zealand on the work of ‘biological scientists’ in human embryology and assisted reproduction. In this study, the tasks of medical scientists and embryologists are divided into a five-fold ‘object of care’: clients, reproductive material, the scientific and bureaucratic system that underpinned their work, the quality of the team dynamics and each scientist’s own internal state or ‘fitness to work’ . In New Zealand, these scientists were strongly encouraged to make personal contact with their clients to convey results and explain procedures, rather than to work anonymously in a remote laboratory with decontextualized reproductive material ; this explains their engagement with and commitment to care for patients in the first place. Furthermore, the emphasis on the other four objects of care was related to the idea that they were working with ‘precious’ material, referring in particular to oocytes and embryos; mistakes with such irreplaceable material were simply not an option . Other than taking care of these materials, embryologists took care of the scientific and bureaucratic processes underpinning the practice of the clinic, laboratory team dynamics and their own internal state of mind. They also undertook many aspects of the emotional labour considered important to high-quality patient-centred care: counselling patients, conveying bad news, trying to impart hope and managing ‘difficult’ patients . In this article, we likewise consider the work of embryologists through the lens of care, building upon and expanding the understandings of the work of embryologists as care-work. However, in a slightly different approach to the above categorization , we suggest care is enacted and co-produced through the interaction of people and things, an approach used in science and technology studies and material semiotics . This approach considers how people and material objects shape each other through relationships, which gain meaning as they are situated in practices and vary in different contexts. Instead of only describing how embryologists care, we consider how embryologists and the practices of their work enact care and are mutually shaped in the process. This allows us to consider how tasks, technologies and people – patients and other staff – together enact care within an IVF clinic. The approach, captured in ethnographic descriptions of IVF clinics, highlights the ontological choreography of multiple actors and technologies in the provision of care . Across sub-Saharan Africa (SSA), there is a shortage and maldistribution of IVF clinics. It is estimated that 1500 assisted reproduction cycles per million infertile people are required in SSA to meet present needs, but in 2020, only 87 cycles per million took place . The International Federation of Fertility Societies identified some 210 clinics in SSA, the majority in South Africa (40), Nigeria (96), Ghana (18) and Kenya (11) . Almost all clinics offering IVF in SSA are private clinics, and as a result, ARTs are not affordable for most people experiencing fertility problems in these countries . Only a few initiatives of publicly funded IVF in SSA countries (Nigeria, Mali and Uganda) have been reported ; in South Africa, only three public academic clinics offer a limited number of subsidized IVF cycles . 
Expanding IVF care across the continent is difficult, given the limited number of clinical and laboratory staff with the necessary expertise. In particular, there exist a shortage of embryologists, challenges in providing training for them and difficulties in retaining experienced staff due to a ‘brain drain’ to other countries. Training options for embryologists differ across countries. In South Africa, stringent selection and training for embryologists is observed (A Whittaker & T Gerrits, personal communication). Medical biological scientists can enrol in any of 12 different training programmes (such as genetic counselling, medical physics or microbiology) at seven different universities with medical faculties or can train at any one of six SANAS (Health Professions Council of South Africa)-accredited medical institutes or diagnostic laboratories. However, reproductive biology training is provided at only two institutions. Medical scientists with a four-year degree in science may enrol in a 24-month prescribed evidence-based internship in reproductive biology at one of two authorized academic ART laboratories (under the auspices of the Medical and Dental Board). Clinical technologists complete a two-year training in basic sciences at one of three universities of technology, then specialize in reproductive biology at various authorized ART laboratories. Certification of Independent Practice by the Health Professions Council of South Africa as a clinical embryologist is needed to practise as an embryologist. Overall, at the time the study took place, the numbers of people training were small; within public institutions, there were only nine new biological scientists being trained in reproductive biology at two hospitals (Steve Biko Hospital and Tygerberg Hospital) connected, respectively, with the University of Pretoria and the University of Stellenbosch (A Whittaker & T Gerrits, personal communication). In this article, we draw on our work on the emerging IVF industry in SSA, during which we observed the multiple tasks and work of embryologists that supplement their laboratory-technical tasks. Below, we first present the motivations of embryologists in SSA. We show their high level of engagement and commitment, noting the diversity of their roles and tasks . As we illustrate, the roles of embryologists are complex and may include work not undertaken in some other settings around the world. In the clinics we observed and in other interviews, embryologists were highly valued by fertility specialists and considered crucial members of the care team for patients and regularly consulted for their expertise. Care-work enacted by embryologists in SSA includes human reproductive materials, patients, running the laboratory, the profession and data. We argue that this care-work, in concert with their technologies, is crucial to achieve the main goal of clinics in providing effective and (high) quality infertility care. Finally, we explore aspects of care-work relevant to infertility care within SSA. We describe aspects of the work of embryologists not mentioned in the previous literature, including fundraising by embryologists and their roles in establishing ‘first’ clinics, mobile work as ‘fly-in fly-out’ (FIFO) staff, combined professional backgrounds and advocacy work where there may be little or no government financial support for IVF, nor legislation or professional guidelines in place. 
We draw on qualitative fieldwork and interviews conducted as part of a large ethnographic study on the emerging IVF industry in SSA. The qualitative methodology fitted the exploratory aims of the broader study and enabled us to combine different means of data collection, such as semi-structured interviews (SSIs), observations and conversations. In this ethnographic study, we interviewed 117 informants (including patients, clinicians, embryologists, nurses, counsellors and donors) from January 2022 to February 2023. This included key informants from across SSA (mainly South Africa, but also Uganda, Mozambique, Namibia, Tanzania, Ethiopia, Cameroon, Zambia and Ghana) and observations during visits to three public and six private clinics in South Africa – Pretoria, Johannesburg, Mbombela and Cape Town (in September and October 2022). In this article, we draw on SSIs with 11 embryologists who work or previously worked in fertility clinics in South Africa, Namibia, Ethiopia, Uganda, Zimbabwe, Kenya and Zambia. Thirteen embryologists were approached for an interview, of whom 11 agreed, one declined, and one did not respond. The conduct of SSIs is a valid way to gain insights into people’s accounts – their views and experiences . Informants were recruited through direct approaches to fertility clinics and personal networks of the study team. As hardly anything is known about the role of embryologists in SSA and no database exists, we opted for a combination of convenience and maximum variation sampling, attempting to include embryologists working in different contexts, positions and clinics to explore their different views and experiences . We spoke with six male and five female embryologists, all working in different clinics; nine of them (had) worked in private clinics and three in public clinics; their work experience varied substantially, from around 40 years to a couple of years. The SSIs, using a SSI-guide (presented in the Appendix, see section on given at the end of the article.), lasted on average approximately one hour and were conducted in person during visits to clinics or via Zoom throughout 2022 and 2023. All participants gave signed informed consent. Participants were asked to describe their work and comment on their motivation to do this work and its challenges, describe their roles and tasks in the clinic, reflect on what they felt might improve access to ARTs in SSA and consider the future of IVF in the region. For the current article, we used insights gained about their motivation and variety in roles (see also ). All interviews except one were recorded and transcribed (in one case, when the informant declined to be tape-recorded, notes were taken manually). One interviewee asked for the interview guide before the interview took place and answered the questions in written form; this document was shared with the researchers during the interview. Interviews were thematically coded (inductively) by the two first authors and then compared across the sample to note similar and contrasting opinions. As is common practice in social science, we provided all participants with pseudonyms (rather than numbers) to emphasize their personhood. Given that the community of embryologists is very small, we have not provided further data on the background and ages of informants to protect their anonymity. Ethical clearance was granted by Monash University (MUHREC 27166), the University of the Witwatersrand (M210546) and participating clinics. All names in this article are pseudonyms. 
Motivations to work in embryology In all conversations, we asked embryologists what got them involved and what drives them to stay in the IVF industry. Their strong motivation and commitment stood out despite the long hours, as embryologist Anje (South Africa) expressed: You know I must be honest with you, there were many times that I really wanted to get out of it because in the beginning it’s long hours, it is irregular hours. In the days when we started out we would have aspirated in the morning and then in vitro culture the eggs and strictly 4 o’clock in the afternoon – you were not allowed to do fertilisation before 4 o’clock. So that being a Monday, a Saturday, or a Sunday, 7 days a week. That is how we used to work. So the hours were very difficult for me but then at that time it just so happened that every time that I wanted to get away or do something else my road just got deeper and deeper into this’… as much as I at times tried to get out of it my roads always lead into deeper things, more, yeah, and that’s why I’m still here. When asking our embryology informants working in SSA IVF clinics to describe what their jobs entail and what an average day looks like, many first emphasized that ‘no day is the same’, given the enormous diversity of their tasks. In attempting to describe a ‘typical day’, one embryologist in an academic training clinic in South Africa explained in a written description: Started work at 06:30 h with mail over breakfast and pre-reading intern reports, followed by evaluation of embryos progress in the embryoscope at 07:00 h; conducted morning meetings to discuss previous procedures, current embryo development and the day’s ART tasks; then undertook administration and in-person talks with interns at 08:00 h; followed by tasks related to the work program including dealing with financials/disposables/equipment/repairs at 11:00 h. At 12:00 h had to troubleshoot a lab event and problem-solve, then had lunch [during which time processed more emails]. By 02:00 h undertook some research work as well as professional association activities and database entry. Went home at 04:50 h and then at 06:00 h was involved in an African Federation Fertility Society – Webinar. For many embryologists, the variety of their work is the attraction. Octavia (South Africa), who is involved in andrology and embryology, emphasized that this is what motivated her. She described it as ‘fascinating’: That is why I say I am actually in a very nice position here because I am an embryologist by registration, I still do embryology, I do what I love, I love working with sperm. And then also, I mean, the shipments and the donor sperm and I mean – when I started doing this I never thought I would choose a donor for a patient. These multiple daily tasks and responsibilities were described as rewarding by all our informants, though also extremely challenging, given the extended hours of work each day and over weekends. Most described good relationships with the fertility specialists and other clinical staff, recognition of their importance to the workings of the clinic and autonomy in their scientific work (reinforced in our interviews with fertility specialists). Finding a good balance between clinic care-work and domestic care-work at home with family was a topic that some embryologists struggled with, especially women who often had the double burden of gendered housework and family responsibilities in addition to paid work. 
In all conversations, we asked embryologists what got them involved and what drives them to stay in the IVF industry. Their strong motivation and commitment stood out despite the long hours, as embryologist Anje (South Africa) expressed: You know I must be honest with you, there were many times that I really wanted to get out of it because in the beginning it's long hours, it is irregular hours. In the days when we started out we would have aspirated in the morning and then in vitro culture the eggs and strictly 4 o'clock in the afternoon – you were not allowed to do fertilisation before 4 o'clock. So that being a Monday, a Saturday, or a Sunday, 7 days a week. That is how we used to work. So the hours were very difficult for me but then at that time it just so happened that every time that I wanted to get away or do something else my road just got deeper and deeper into this'… as much as I at times tried to get out of it my roads always lead into deeper things, more, yeah, and that's why I'm still here. When asking our embryology informants working in SSA IVF clinics to describe what their jobs entail and what an average day looks like, many first emphasized that 'no day is the same', given the enormous diversity of their tasks. In attempting to describe a 'typical day', one embryologist in an academic training clinic in South Africa explained in a written description: Started work at 06:30 h with mail over breakfast and pre-reading intern reports, followed by evaluation of embryos progress in the embryoscope at 07:00 h; conducted morning meetings to discuss previous procedures, current embryo development and the day's ART tasks; then undertook administration and in-person talks with interns at 08:00 h; followed by tasks related to the work program including dealing with financials/disposables/equipment/repairs at 11:00 h. At 12:00 h had to troubleshoot a lab event and problem-solve, then had lunch [during which time processed more emails]. By 02:00 h undertook some research work as well as professional association activities and database entry. Went home at 04:50 h and then at 06:00 h was involved in an African Federation Fertility Society – Webinar. For many embryologists, the variety of their work is the attraction. Octavia (South Africa), who is involved in andrology and embryology, emphasized that this is what motivated her. She described it as 'fascinating': That is why I say I am actually in a very nice position here because I am an embryologist by registration, I still do embryology, I do what I love, I love working with sperm. And then also, I mean, the shipments and the donor sperm and I mean – when I started doing this I never thought I would choose a donor for a patient. These multiple daily tasks and responsibilities were described as rewarding by all our informants, though also extremely challenging, given the extended hours of work each day and over weekends. Most described good relationships with the fertility specialists and other clinical staff, recognition of their importance to the workings of the clinic and autonomy in their scientific work (reinforced in our interviews with fertility specialists).
Finding a good balance between clinic care-work and domestic care-work at home with family was a topic that some embryologists struggled with, especially women who often had the double burden of gendered housework and family responsibilities in addition to paid work. The combination of laboratory practice with research added to their satisfaction in working in a field in which new technologies and research questions were continuously introduced, but this competed with the attention they wanted to give to their own family: Ja, so for me it is difficult. Sometimes I get to work and I think ‘I’m done, I can’t be a mom and do this and have a husband that has a difficult job.’ But then I love the research side, and then there is just a new research question or this new thing popping up – and there are so many questions in this field! So from a research perspective, it is an amazing field to be in. (Octavia) Another embryologist underlined the importance of research, yet regretted they only had limited time for that. Some embryologists had a personal motivation for their involvement, such as seeing close relatives or friends suffer from infertility or not being able to conceive themselves. The latter was the case for embryologist Sam (Zimbabwe): ‘I mean I'm more than motivated, you know … that my child is an IVF baby and that’s why I was motivated, yeah; so I mean I couldn’t get any bigger motivation’. Octavia’s own experience of motherhood increased her motivation to continue working in the field: ‘And then I had my own child and for me it changed there … I realised this is what people want and this is why they are there’. For Billy, a family connection to the IVF industry in Uganda inspired him to gain an advanced degree qualification in embryology. In addition, a number of embryologists undertake various forms of advocacy work, such as with government policies and institutions to improve funding for infertility treatment, to ease barriers to the importation of equipment and medication, or to improve access for patients. Embryologists were highly motivated because of their pioneering role in introducing fertility care in their country, as embryologist Erik (Ethiopia) explained: ‘The government, they didn’t give it any attention, the health professionals didn’t know about it too’. He noted the social stigma experienced by infertile people in Ethiopia, especially women, who he said had little recourse to biomedical treatment; here, polygamy, witchcraft or holy water were used to overcome infertility, and ‘women becoming nuns in convents and divorce’. In addition to working in embryology, Erik had become a fundraiser for a public infertility unit and saw himself as an advocate whose mission was ‘opening the eyes’ of health professionals and policymakers to the burden of infertility. Sam was the only embryologist who mentioned that his involvement in embryology was partly financially motivated, although the profession attracts a relatively high salary, especially in the private sector, this also makes retention of embryologists in the public sector difficult. Erik, for example, referred to the different salary levels for expert IVF staff in the public sector in Ethiopia: ‘I think gynecologists were paid, like US $2000, and the embryologists, it’s like, not more than US $500 in a month, which is big money, actually’. 
Some embryologists intimated that they had moved into the private sector because of better conditions, pay and experience, contributing to shortages of embryologists in the public sector of SSA countries. The primary role of embryologists, recognized in the laboratory, is the responsibility for human reproductive materials. As noted above, the sense of care derives from the clinical work – the work of making a baby – and the work in preserving the materials of potential human beings through handling, testing, vitrifying, transporting and thawing with care. There is enormous responsibility invested in the embryologist; at any point, they can succeed or fail through technical mishap, neglect or carelessness. The laboratory work must be precise, documented and double-checked, all under time pressures. There is great emphasis on the efficient use of laboratory materials and time due to the demand for cost-efficiency and specific biological chronology – time periods required for fertilization, embryo development and transfers. Several embryologists emphasized that they handle human embryos, which are – according to one interviewee – ‘not objects’. One argued that embryologists need to take care not to become disassociated from the embryo and to be aware of the special status of the embryos in the work they do. She illustrated this by recalling an event early in her career, when she had grown several embryos for one patient, and the clinician-in-charge had asked her to throw away three of those. She bluntly refused to do so – they were ‘perfectly good embryos’. While recognizing the preciousness of the materials was common among embryologists, one strongly distinguished between the preciousness of different materials involved in IVF. When talking about shipping materials and the risks involved, Octavia differentiated between sperm, eggs and embryos: So what we used to do a few years back, the clinics give patients a flask, a thermos flask, and you fill it with [a medium] and you put your sperm or eggs in there and you travel it up and down. So with sperm I am fine with patients to do that, but now with eggs and embryos it is starting to get a bit risky. So [a shipping company] is close by and I always tell patients to contact them and let them bring their shipper and we pack the shipper or they pack the shipper – it is at an additional cost, I know, but at least we know it is safe; the shipper is upright. And I mean sperm is one thing, but if your embryo, that’s your last embryo and now you are walking around with it in a flask! The technologies themselves figure in this care, as the ‘flask’ is not considered ‘safe enough’ for oocytes. Having appropriate, up-to-date, clean equipment, materials and space is paramount, and it is with pride that embryologists displayed to us their newest equipment, impeccable systems of record-keeping, effective systems for identifying material, checklists and workspaces. The technologies are both symbolically and pragmatically extensions of embryology care – they are the exclusive domain of the embryologist and the means through which material is tested, counted, fertilized and stored and through which vigilance and protection are enacted. In recognition of the preciousness of the materials they are working with, some embryologists also referred to their dependence on higher powers, beyond technology, to be successful . 
Praying at crucial moments, such as trying to find a healthy spermatozoon in a testicular biopsy or during ICSI fertilization, can be considered a practice of care undertaken in hope to increase the chance of success. For Sam, treatment failures were the most difficult of all: ‘Especially in the first year or two, you know it was really difficult when you failed, either you failed to fertilize the eggs or the embryos end – virtually no pregnancy’. Although he now feels he is experienced enough that he is capable of resolving most situations that confront him, he continues to call on God and says His support is still ‘dearly needed’. Embryologists may be thought of as technicians working in laboratory settings – dressed in white coats, wearing hair caps and gloves for hygienic purposes, distanced from the people they are working with and whose gametes they are handling with care. This does not reflect the situation of the embryologists we spoke with, who were all involved in emotional labour as part of their jobs, which was also observed by . All of those interviewed directly interacted with and cared for their patients outside the laboratory, and this seemed to be an essential and rewarding part of their job. These interactions differed depending on the kind and size of the clinic(s) in which they worked, their particular professional background, including training additional to embryology, and the position they held in the clinic. All were involved in informing and communicating with patients, such as explaining the procedures involved in IVF and the results of various treatment steps (for example, the number of ova retrieved or embryos fertilized). One embryologist (Anje, South Africa), also trained as a psychologist, underlined the importance of providing this information as a way for their patients to gain familiarity and a sense of control: I think, you know I try to just, I try to involve them as much as I can so that in the end they will realise that I cannot guarantee them a baby, but I can guarantee them that I will walk the road with them. And I think having the first interview, … I have about half and an hour interview with them explaining to them what we’re going to do, how they can expect to feel, what they can expect in terms of feedback, when they should be coming back, what we’re going to do with the embryo transfer, what will happen to their remaining embryos and in my way I try to familiarise them with very unfamiliar circumstances, and also try to at least put them in control in a situation where they don’t have control over anything … In such work, embryologists navigated the different backgrounds and knowledge bases of patients. Anje, for example, had put efforts into learning the basics of Portuguese to enable her to communicate directly with patients coming from Mozambique. Billy explained that at their clinic in Uganda, staff adapted their explanations of complex fertility issues to ensure comprehension: The patients first of all, I mean it’s varied. You have the highly educated ones who come to you after they have done all their research on the internet or whatever and then you have those who have no idea what they are even doing. So our way was really to break it down to them at their level. You know I explained the concept of a seed and the soil, why does the seed germinate and others don’t germinate… This is what you are going to go into, this is what you should expect and these are the success rates. If you are not successful we can do this again. 
These are your options. So we used to have very good dialogues and we would discuss options, you know. Some of the interviewed embryologists were responsible for sharing bad news, such as the failure of fertilization or poor-quality embryos. Anje compared support practices in universities in the early days of IVF – when social work and psychologists were involved in the IVF clinic – with more contemporary practices in private clinics, where things are ‘much speeded up’, with less time for counselling. Sometimes, negative results were left to secretarial staff to convey over the phone. She felt communication by the embryologist was one means to better support people: (The patients) become so anxious as to (say things like) ‘yesterday you said I had nine eggs, now today you say only five have been fertilized, now tomorrow only three are developing, what is happening? Will I – you know we can’t do anything about the stress that these people are under, or we can’t take it away. It’s part of the whole thing, but you can definitely limit the period that they have to cope with it on a daily basis but by at least talking to them, explaining to them what the real situation is. In clinics offering donor material and surrogacy, some embryologists were involved in educating patients with little knowledge of these practices, as Billy explained: If somebody really was post-menopausal, you know there was no point in wasting time selling them what you don’t have (IVF with her own eggs), but we freely talked about the concept of egg donation, egg sharing, surrogacy, but breaking it in a way that they could digest. For instance, somebody would say ‘Hey, but if another woman carries my baby then that’s not my baby’, and then we explain the genetics but at the level that they understand. A few of the embryologists we interviewed were also involved in donor selection, leading to extended interactions with patients. For example, Octavia was responsible for finding appropriate sperm donors (from an external donor bank), which she then presented as potential candidates to intended parents. In her experience, some intended parents were able to choose straight away; others continued to ponder about who would be the best donor, with lengthy conversations with Octavia: It is a huge responsibility, but I do look at it very scientifically. I never help a patient choose a donor if they say they have no selection criteria. So you need to give me three or four selection criteria, we need to have something, so I try and approach it as scientifically as possible with as little emotional connection to it as possible. Embryologists are also heavily involved in clinic policies and ethical considerations surrounding the use of third-party material. In South Africa, sperm donation is allowed to be anonymous, but elsewhere in SSA countries where our informants worked, little or no regulation existed. This means that clinics determine the ethical considerations and conditions under which third-party material is used (cf. ). For example, in Zimbabwe, although third-party donation is currently anonymous at their clinic, embryologist Sam is concerned that in the future, direct-to-consumer DNA testing may result in donor-conceived children tracing their family background: ‘I am worried for 20 years to come or so’. For that reason, to be able to care for such requests in the future, he keeps track of donors’ names and other details. 
At the time of the interview, this was a handwritten file; subsequently, a digital donor record system was installed at the clinic. Providing information on the procedures around shipping donor gametes and embryos is another task of one South Africa-based embryologist, although the actual shipping is organized by companies that provide specialized IVF courier services. This also involves direct communication with patients, to explain the options and procedures. Although the clinic is not legally responsible for these courier tasks and the risks involved, such as the materials not being carried properly and therefore arriving damaged, Octavia had to have conversations with patients about this. Due to the paucity of infertility clinics across the SSA region, several embryologists were involved in work as ‘pioneers’ lobbying for funding and investment to build ‘first’ clinics (both public and private), getting them running and offering a variety of treatments (including egg and sperm donation), or expanding to other countries. We consider this as ‘caring for the clinic’. This was time-consuming work that was additional to actual laboratory work – caring for ‘precious’ materials – and caring for patients. Setting up a clinic involves several steps: budgeting; finding investors or engaging in some form of crowdfunding; finding a proper building and adapting it to fit the requirements of an IVF clinic and laboratory; recruiting and training staff; purchasing equipment and arranging permissions for its import; getting medication approved, ordered and stored; guaranteeing backup of medication; logistics to ensure adequate supplies of culture medium; and so forth. In these steps, embryologists were confronted with various hurdles and challenges. One embryologist had undertaken such work in several countries and was often called in to troubleshoot laboratories with poor success rates to try to identify and fix the problem. Convincing other people, either policymakers in the public health service sector or private investors, to support the establishment of a clinic was the first hurdle they had to take. International professional contacts – experts they met during training abroad or at international conferences – were important for this. Erik, for example, collaborated with an Ethiopian university to convince some government officials and university professors to establish a public IVF clinic in a wing of an existing hospital. In the absence of financial support from the government, he then facilitated liaison with a US university clinic, which led to support for the IVF clinic for a period of five years. To staff the clinic, three gynaecologists working in the hospital and interested in infertility were recruited and sent to Taiwan for short IVF training courses and to Egypt for on-the-job training; embryologists were sent to India for a six-week course. Erik then assisted with getting approval for medication and culture media, all newly introduced products in Ethiopia, which had to be approved by the Ethiopian Drug Administration. The bureaucratic hurdles in getting approvals were manifold; at the time of the interview (October 2022), they were still in process. The public IVF clinic started functioning in 2021, more than two years after Erik proposed the clinic. Meanwhile, Erik had found another investor – a private company – prepared to invest in a private IVF clinic in Tigray Province. 
This company uses money from private investors who want to invest in health, led by a UK citizen originally from Ethiopia who understood the problem. With this investor, Erik was able to convince the government hospital in Tigray Province to build a new storey on top of the existing women’s hospital – ‘they preferred it not to be a solo IVF clinic, because it’s like, people don’t like it, it will be like, discriminatory’. Due to hostilities in the province, this clinic was not used when this interview was conducted – ‘it’s sitting there. Everything is there, the equipment. It’s idle now’. So, while Erik spent much time in setting up IVF clinics in Ethiopia, he has returned to a third country to work as an embryologist. Other embryologists reflected on similar challenges in setting up and expanding IVF clinics in SSA. Billy, who has lived and worked for a long time in Uganda as an embryologist, well remembers the efforts it took to get IVF introduced and the system working. Over the years, he invested time and effort in organizing IVF logistics. He arranged to purchase equipment, second-hand, from a European IVF centre that was closing, and had to convince the government that this was not just ‘the West dumping their used stuff’. Some large scientific equipment suppliers did not yet have agencies/offices in SSA, and they even had to buy instruments like a small microscope in Dubai, which was the nearest agency. Billy mentions that they were quite privileged from the start, ‘despite only purchasing and importing culture media, really buying a small quantity of stock for a limited number of patients’. He noted the support he received from ‘friends from Brussels who kind of lobbied for us’, which enabled them to establish relationships when going to conferences and allowed them to buy smaller quantities: ‘And, when their numbers were increasing over time (the companies) started taking us more seriously and they could ship (larger quantities)’. Getting equipment and other products into the harbour is one thing; getting them to pass customs duties is another: When they (government officials) don’t know these kinds of things, equipment and all, they tend to classify them as they want that attracts a whole huge duty. So it took us some kind of diplomacy dealing with key stakeholders in the ministries of health, and some government officials, some of whom had been our patients, to lobby. So once those kinds of people did speak on our behalf, yeah for some countries especially Uganda we had the favour of having a lot of the duties on some of these things lifted. So that helped us. Other embryologists had similar stories of their work setting up clinics, lobbying for funds, approaching investors and negotiating with government agencies. These roles are far beyond those typically associated with embryologists but indicate the crucial roles they play in advocating for the expansion of infertility services across SSA. The shortage of expertise in embryology in many countries in SSA leads to the movement of clinicians and embryologists to provide services on rotation across the region, ‘flying-in flying-out’ (FIFO) across countries – and even continents – to deliver their lab services in short periods of time, often on a monthly or bimonthly basis . This transnational mobility – of patients and staff, gametes and embryos, lab equipment, materials and medication – complicates the functioning of the clinic and laboratory and further extends the care-work of embryologists across borders. 
This mobile FIFO work involves travel on a regular basis to other ‘satellite’ clinics or laboratories to deliver laboratory services in countries without embryology staff. This affects the work of embryologists, leading to an increase in ‘batching’, a practice that involves the control and manipulation of women patients’ hormonal cycles so that egg retrieval, fertilization of eggs with sperm and embryo transfer can take place for a cohort of patients within a discrete time period of a few days, making efficient use of the presence of embryologists. Embryologist Billy, for example, has worked on a regular circuit traversing satellite clinics in Uganda, Tanzania and Zambia. The organization of work is influenced by the scarcity/availability of certain expertise –in particular embryologists – and the need for time, material and cost efficiencies. For the embryologist, such work is intensive. Peter, for instance, noted the intensity of his workload during periods working in a satellite clinic in Namibia and elsewhere outside South Africa when he is the only one in the laboratory, ‘so I do everything. Instead of there being two or three people helping there is only one person’. Dedication to the profession was evident in our interviews, in particular the need for further training in the region and professional development opportunities for embryologists who may be quite isolated in disparate countries. Concerns about recognizing embryology as an important specialization were expressed in our interviews as well. For example, in South Africa, the country has only two full professors in embryology; there is no professional society for embryologists (though a Special Interest Group for embryologists exists in SASREG (Southern African Society of Reproductive Medicine and Gynaecological Endoscopy)); the capacity for training embryologists in clinics is limited; and legally, the term ‘embryologist’ is not defined or protected. One embryologist mentioned their involvement in training as a key source of personal satisfaction and motivation: (I) encourage independent evidence based-scientific thinking and life-competencies. So that interns carry on a philosophy of strong self-worth, develop their own capabilities, based on experiences and knowledge where to get answers if in doubt. Trainees in medical embryology are carefully selected. As one trainer noted, embryologists must be able to carefully handle the precious materials they are going to work with, and not everyone has this capacity. Our interviewees noted that approximately 15 applicants apply annually in South Africa to be trained in medical embryology, usually coming from biological science backgrounds, but of these, only three are accepted due to the limited capacity to train more. The applicants have to spend a day in a lab to watch the realities of the work involved. The embryologists and medical scientists with whom they work during that day will then score the applicant on a number of qualities, before the applicant is invited for an interview. At the interview, we were informed that their motivation for training and the work is an important topic. Once trained, most embryologists are in such demand that they are lost to public health systems and usually find work in the private sector. Several experienced embryologists in our sample had emigrated for further training opportunities and experience and also, in some cases, to permanently live and work overseas. 
As a result, across SSA, clinics complained about the difficulties in attracting and retaining embryologists and other medical science staff. While working in the IVF laboratory – performing laboratory technical tasks – may be thought of as the embryologists’ primary task, in our study, all embryologists combined various forms of work beyond what is usually considered their conventional ‘role’. This is partly due to the context in which they work. Our exploration of the work of embryologists highlights the importance of context in shaping their practices, interactions and expectations. The shortage of embryologists, the lack of ‘corporate’ multi-centre IVF clinics in South Africa and the region (as may be the case in the US), the paucity or lack of trained counsellors in clinics, the mobilities in IVF staff and patients characteristic in the region and the need to set up ‘first’ clinics in many countries all mean that embryologists’ work extends beyond the technical. Within SSA, their roles often involve tasks beyond what might be expected of an embryologist in a laboratory in the US or Europe. The shortage of embryologists, other clinical staff and counsellors affects practices in SSA clinics, and accordingly, embryologists we interviewed undertook entrepreneurial tasks, advocacy, training, development of regulations and mentoring and patient counselling, on top of laboratory work. Clearly, this varied with the size of the clinic and its stage of development (for example, fundraising was only done by embryologists initiating a clinic). This combination of tasks makes for a dynamic and fulfilling career for those we interviewed but also stretches their capacities. It raises the question of whether their deployment across this range of tasks contributes to the scarcity of embryologists in SSA. We conceive of the work of embryologists as forms of care-work and suggest that care is enacted (and experienced) in IVF clinics through the sum of tasks, technologies, patients and other staff, which together enact care. This not only suggests the importance of care as a fundamental outcome of the work of all staff and technologies but also suggests the importance of the context, expectations and reception of care. This is a different approach to the traditional view of care in IVF clinics, which tends to view it as part of a job description of a particular staff member and assumes that quality care follows their actions alone. Our approach breaks down divisions between ‘technical’ and ‘clinical’ staff and recognizes the various ways in which care is enacted: towards gametes and embryos, clinics and technologies, the profession, patients and, in SSA, the broader goals of providing access to infertility treatment to patients who need IVF. The embryologists we interviewed were all involved in various forms of emotional labour and care with patients; they took pride in this and saw this as part of ensuring quality patient care . We were initially surprised by this, and this also contrasted with the experience of one embryologist who had worked in the US, where they had no contact with patients. Embryologists we interviewed saw themselves not only as technically adroit but also as responsible for creating families. They found that patient contact motivated their careful handling of the ‘precious’ human reproductive materials with which they worked. 
However, some of the interviewed embryologists are undertaking tasks, such as counselling or donor selection, for which they are not necessarily trained (although it should be mentioned that one of the interviewees combined specializations – in embryology and psychology – which justified this combination of roles). IVF clinics are strongly recommended to follow internationally accepted guidelines for IVF counselling and for the use of donor material and donor selection, as provided by ESHRE and other professional organizations, which include the training of specialists in these fields (https://www.eshre.eu/Guidelines-and-Legal). In the Global North, the changing work of embryologists is a subject under discussion. This has been prompted by the advent of automated AI and microfluidics, which will shift the technical roles of the embryologist away from manual manipulation and towards more data capture, management and analysis. However, in other ways, our study suggests that the caring role of embryologists may increase with the advent of new technologies, requiring vigilance over AI decisions and an increased need for informed communication with patients. Recognition of the deep engagement of embryologists in enacting care and contributing to successful IVF in their clinics is essential. In Global South countries such as those in SSA, the context in which embryology is practised poses differing challenges. Given the shortage of embryologists in SSA, their deployment across such a range of tasks further strains already scarce embryological capacity. In SSA countries, access to affordable and effective IVF is required, and there is a pressing need to train more embryologists to cater to the growing need for and use of medically assisted reproductive technologies. Furthermore, models and technologies of low-cost IVF all require the human resources of trained embryologists to ensure quality care and efficacy. If access to IVF is to be achieved in the region, then more embryologists need to be trained and retained.
A major limitation of this study is that only 11 embryologists who are or have been working in SSA were interviewed, not covering all SSA countries where IVF clinics exist. However, as this article is intended to explore the variety of embryologists' roles and the various forms of enactment of care – not to make generalizations or judgements about the functioning of the embryologists in these clinics – this is not considered a major problem.
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the work reported.
This work was supported by the Australian government through an Australian Research Council Discovery Project Grant (DP 200101270).
TG and AW conceived the study, conducted the interviews, analysed and interpreted the data and authored the article. LM analysed and interpreted the data and co-authored and edited the article.
Three-dimensional analysis using a dental model scanner: Morphological changes of occlusal appliances used for sleep bruxism under dry and wet conditions
Tooth grinding and jaw clenching during sleep, commonly referred to as sleep bruxism (SB), are risk factors for various pathological conditions such as tooth wear, breakage or removal of prostheses, exacerbation of periodontal disease, and temporomandibular joint (TMJ) disorders. In SB, forces greater than the maximum voluntary bite force may be exerted, and the effect of this excessive bite force on oral and maxillofacial function is the cause of the abovementioned symptoms. SB is often managed using a removable intraoral appliance known as a stabilizing-type occlusal appliance (OcA) that covers the occlusal surfaces of the dentition. An OcA prevents direct loading on individual teeth and distributes excessive occlusal forces by providing occlusal contact throughout the dentition. It also reduces the burden on the TMJ and dentition and protects the stomatognathic system from damage, such as a reduction in occlusal vertical dimension caused by pathological tooth attrition and TMJ disorders caused by the resulting load on the TMJ. There are also reports that OcA use suppresses SB for only about two weeks. OcAs are often fabricated from poly(methyl methacrylate) (PMMA). After polymerization, PMMA undergoes dimensional changes: expansion caused by water absorption and shrinkage during drying. Sweeney reported that because PMMA denture bases are affected by water absorption, occlusal adjustment should be delayed until the PMMA is saturated with water. Therefore, to prevent their deformation, intraoral devices containing PMMA must be stored in water. Bohnenkamp placed die pins on OcAs fabricated with PMMA, measured the pin-to-pin distance, and reported that storage under dry conditions for two weeks resulted in significantly shorter distances than storage under wet conditions. Lim and Lee reported the three-dimensional (3D) deformation of complete dentures made of PMMA under wet and dry conditions; storage under dry conditions resulted in greater deformation than storage under wet conditions. However, they only compared the distance between two points of the 3D data using a best-fit alignment. This alignment process repeatedly matches the nearest points of the corresponding images in a virtual space within the software and calculates the difference between the two images, enabling the measurement of morphological differences between shapes. They did not examine the deformation area or volume. In their study, the "surface deviation" was defined as the distance between two points of the 3D data after this best-fit alignment. In addition, because denture bases and artificial teeth are made of different materials, their findings do not clarify how an intraoral device fabricated from a single material, such as an OcA, deforms in three dimensions. Some studies have evaluated SB based on three-dimensional measurements of the occlusal wear of OcAs; however, the three-dimensional morphological deformation of OcAs due to different storage methods has not been examined. Therefore, this study provides basic data for evaluating the three-dimensional morphological deformation of OcAs. In recent years, non-contact 3D scanners have been increasingly used.
Compared with contact 3D scanners, which are conventionally considered highly accurate, non-contact 3D scanners are widely used in the dental field because of their simplicity, quick measurement time, and non-inferior accuracy. This study used a dental model scanner to evaluate OcA deformation under dry and wet storage conditions.
Materials
Impressions were made from the upper and lower parts of a dental model (E1-500A-U/500A-L; NISSIN, Kyoto, Japan), as shown in , using ready-made trays (Human Tray; Maruichi, Tokushima, Japan) and alginate impression material (ALGINoplast; Kulzer, Hanau, Germany). Plaster models were made using a hard plaster (Zostone dental hard plaster; Shimomura Gypsum Co., Saitama, Japan). The plaster models were mounted on a Gysi Simplex OU-II articulator, and the incisal guide pin was adjusted to create 1.5 mm of elevation at the first molar. Because grinding during SB occurs over the canine edge and/or molar cusps, the appliance was extended horizontally 1 mm beyond the incisal edges and buccal cusps, the palatal thickness of the OcA was set at 2 mm, and its length was set at 10 mm above the tooth neck. The wax pattern prepared on the working model was invested in a metal flask, the wax was flushed out, and a heat-polymerized resin (Acron; GC) was mixed at a ratio of 0.43 mL/g, as per the manufacturer's instructions. The flask was trial-pressed and deflashed three times at approximately 40 kgf/cm² and then heat-polymerized in hot water at 70°C for 8 h, followed by slow cooling at room temperature. The OcA was given occlusal contacts across the entire dental arch and group function on the canines and premolars, and was then polished. Ceramic spheres (ZrO2 spheres; Sato Iron Works, Osaka, Japan; JIS standard S28 grade, 2 mm in diameter) were embedded in the areas corresponding to the distal corners of 63┴36 on the OcA. After polishing, the OcA was stored in water for one month to reduce the effects of water-absorption-induced expansion and polymerization shrinkage of the PMMA and to hydrate it. After this one month of storage in water, the appliance was stored under wet conditions for four weeks and then under dry conditions for four weeks. For wet storage, the appliance was placed with water in a zippered polyethylene bag, which was sealed and kept in a dedicated storage case. For dry storage, the appliance was kept in a dedicated storage case without contact with water. Both storage conditions were maintained at room temperature (22.0–24.0°C).
Analysis method
OcA 3D measurements were performed using a non-contact 3D scanner (Identica; MEDIT, Seoul, Korea). The OcA surface was sprayed with an optical impression aid for dental laboratory use (Angel Scan Spray; DENTACO, Essen, Germany) and mounted on a jig for scanning before measurements were taken. An accuracy test was conducted in advance with the Identica scanner: four measurements were performed for each of three transparent colored acrylic resin spheres with radii of 2.5/8 inches (7,940 μm; Sato Tekko, Toyama, Japan) mounted on a plaster base, as shown in (N = 12). From the obtained stereolithography (STL) data, 100 points were randomly selected from the sphere surface, and the sphere radius was calculated 1,000 times using the least-squares method to obtain the mean value (accuracy) and standard deviation (precision) for each sphere. A difference of 10.7 μm between the true and measured values of the sphere radius and a standard deviation of 27.9 μm were observed. The wet-condition storage group is referred to as W, and the dry-condition storage group as D.
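The least-squares sphere fit used in the accuracy test described above can be reproduced with a short script. The following is only an illustrative sketch, not the software actually used in this study: it assumes that 100 points sampled from a scanned sphere surface are available as an N × 3 coordinate array (simulated here with an assumed level of Gaussian noise) and estimates the radius by solving the linear least-squares form of the sphere equation, repeating the procedure 1,000 times to obtain a mean error (accuracy) and a standard deviation (precision).

```python
import numpy as np

def fit_sphere_radius(points: np.ndarray) -> float:
    """Algebraic least-squares sphere fit; returns the estimated radius (same units as input)."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    return float(np.sqrt(w[3] + center @ center))

rng = np.random.default_rng(0)
TRUE_RADIUS_UM = 7940.0   # 2.5/8-inch reference sphere, in micrometres
SCAN_NOISE_UM = 30.0      # assumed scanner noise, for illustration only

radii = []
for _ in range(1000):     # repeat the fit 1,000 times, as described in the text
    # sample 100 random points on the upper half of the sphere (the scanned side)
    phi = rng.uniform(0.0, 2.0 * np.pi, 100)
    theta = rng.uniform(0.0, 0.5 * np.pi, 100)
    pts = TRUE_RADIUS_UM * np.column_stack([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])
    pts += rng.normal(0.0, SCAN_NOISE_UM, pts.shape)
    radii.append(fit_sphere_radius(pts))

radii = np.asarray(radii)
accuracy = radii.mean() - TRUE_RADIUS_UM   # mean error versus the true radius
precision = radii.std(ddof=1)              # spread of the repeated estimates
print(f"accuracy (mean error): {accuracy:.1f} μm, precision (SD): {precision:.1f} μm")
```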
As shown in , the OcAs were measured on day 0, after one month of hydration (W0); after 4 weeks of storage under wet conditions (day 28: W4, D0); and after 4 weeks of storage under dry conditions (day 56: D4). To avoid inter-rater errors, all OcA measurements were performed by a single researcher. The 3D data analysis used a best-fit alignment, a process that repeatedly matches the nearest points of the corresponding images in a virtual space in the software and calculates the difference between the two images. Using the 3D measurement software ZEISS Inspect (ZEISS, Oberkochen, Germany), the best-fit alignment of W0 and W4 (day 28) was performed for the wet condition using the STL data for W0 (day 0) as the reference. For the dry condition, the STL data for D0 (day 28) were used as the reference, and a best-fit alignment of D0 and D4 (day 56) was performed, allowing the wet and dry conditions to be compared. The same test samples were used for both comparisons. The surface deviation (mm) was calculated from the best-fit alignment of the STL data for the entire OcA using ZEISS Inspect, with quantities above the reference data taken as the + direction and those below as the − direction. Since the accuracy test showed that the error from the true value was within 40 μm, ±40 μm was used as the cutoff, and deformation was counted in the − direction below −40 μm and in the + direction above +40 μm. The area on the corresponding coordinate plane over which the surface deviation exceeded the cutoff in the ± directions was taken as the amount of deformation and recorded as the surface area of deformation (mm²). The volume was calculated from the surface area and the surface deviation and was used as the deformed volume. From these data, the maximum deviation in the ± directions and the deformed area and volume in the ± directions were used as outcomes. Statistical analysis EZR version 1.68 (Jichi Medical University Saitama Medical Center, Saitama, Japan) was used for the statistical analysis. The maximum deviation in the ± directions, the deformed area in the ± directions, and the deformed volume of the wet-condition group (W group) and the dry-condition group (D group) were compared using Wilcoxon's signed-rank test. The significance level was set at P < 0.05.
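A minimal sketch of the deviation-to-outcome step and the paired comparison is given below. It assumes the superimposition software can export, for each triangle of the OcA mesh, its area and a signed best-fit deviation; the ±40 μm cutoff, the area and area × deviation summaries, and the Wilcoxon test mirror the description above, although the exact computations inside ZEISS Inspect and EZR may differ, and the numeric values shown are placeholders rather than study data.

```python
import numpy as np
from scipy.stats import wilcoxon

CUTOFF_MM = 0.040  # +/-40 um cutoff derived from the accuracy test

def deformation_summary(tri_area_mm2, tri_dev_mm):
    """Summarise deformation from per-triangle areas (mm^2) and signed
    best-fit deviations (mm): + above the reference surface, - below."""
    tri_area_mm2 = np.asarray(tri_area_mm2, dtype=float)
    tri_dev_mm = np.asarray(tri_dev_mm, dtype=float)
    pos = tri_dev_mm > CUTOFF_MM
    neg = tri_dev_mm < -CUTOFF_MM
    return {
        "max_dev_pos_mm": float(tri_dev_mm.max()),
        "max_dev_neg_mm": float(tri_dev_mm.min()),
        "area_pos_mm2": float(tri_area_mm2[pos].sum()),
        "area_neg_mm2": float(tri_area_mm2[neg].sum()),
        # deformed volume approximated as the sum of area x |deviation|
        "vol_pos_mm3": float((tri_area_mm2[pos] * tri_dev_mm[pos]).sum()),
        "vol_neg_mm3": float((tri_area_mm2[neg] * -tri_dev_mm[neg]).sum()),
    }

# Paired W vs D comparison for one outcome across the 8 appliances
# (placeholder values, not the measured data):
vol_neg_w = [0.18, 0.22, 0.25, 0.20, 0.23, 0.21, 0.24, 0.22]
vol_neg_d = [0.55, 0.70, 0.62, 0.66, 0.59, 0.71, 0.64, 0.68]
stat, p = wilcoxon(vol_neg_w, vol_neg_d)  # two-sided Wilcoxon signed-rank test
print(f"W = {stat:.1f}, P = {p:.3f}")
```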
A typical example color map of the superimposed data for W0 and W4 for an OcA in the W group and for D0 and D4 for an OcA in the D group is shown . Deformations in the positive direction are mapped in red, areas with no significant deformation are mapped in green, and deformations in the negative direction are mapped in blue. Visual evaluation of the 3D deformation of the OcAs showed no significant deformation over time in the W group; however, deformation patterns were detected over time in the D group. The posterior limb showed deformation in the − direction (blue), and the occlusal surface showed deformation in the + direction (red) toward the anterior limb and in the − direction (blue) toward the posterior limb . A typical example of cross-sectional deviation in a molar on the D-group X-axis plane is shown . The cross-sectional deviation is the difference in surface shape between superimposed polygon data in an arbitrary cross-section of the polygon data. also shows + deformation in the forward direction and − deformation in the backward direction around the peak of the step in the guide. A typical example of cross-sectional deviation on the D-group Y-axis plane is shown in . Deformation was observed in the + direction toward the center and in the − direction toward the outside. This characteristic deformation was observed in all eight OcAs in the D group. The deformations of the OcAs in the W and D groups and the results of the Wilcoxon signed-rank test are shown in . The median deviations were 0.069 mm in the − direction and 0.075 mm in the + direction in the W group, and 0.118 mm in the − direction and 0.106 mm in the + direction in the D group. The deformed surface areas in the W group were 3.898 mm² and 1.836 mm² in the − and + directions, respectively, whereas those in the D group were 10.351 mm² and 6.612 mm² in the − and + directions, respectively. The deformed volume was 0.221 mm³ in the − direction and 0.107 mm³ in the + direction in the W group, and 0.649 mm³ in the − direction and 0.376 mm³ in the + direction in the D group. All of these deformation outcomes differed significantly between the D and W groups (P < 0.05). Material properties OcAs are often used for long periods because of their role in distributing excessive occlusal forces, reducing the burden on the TMJ and oral cavity, and protecting the TMJ system from damage to the dentition and periodontal tissues due to pathological occlusal wear, compromised bite, and TMJ arthritis. Therefore, after an OcA is precisely fabricated, it is important to minimize its deformation and maintain its morphology so that it can continue to function. In this study, the surface deviation, surface area, and deformed volume were significantly greater in the D group than in the W group; however, the W group also exhibited slight deformation. The deformed volume was approximately 2–3 times greater in group D than in group W.
In a previous study, Izumi performed 3D measurements of the OcA by placing a Co-Cr framework in a heat-cured resin (Acron Clear; GC, Tokyo, Japan) and building an immediate-curing resin (Unifast II Clear, GC) on the occlusal surface. Hirai used OcA, which is made by building a faceted resin (GC) on a 0.75-mm polyester sheet (DURAN, Iserlohn, Germany). In the present study, OcA fabricated from a single material made of PMMA, which is commonly used in clinical practice, was employed to investigate deformation due to storage conditions. PMMA has many advantages, including sufficient mechanical strength, aesthetics, low toxicity, ease of repair, and a simple curing procedure; thus, it has been most used in the manufacture of denture bases since its development in 1945 and as a material for OcA. PMMA is considered to have relatively low deformation after fabrication and excellent color stability. However, none of the molding methods are free from dimensional changes because of polymerization-induced shrinkage, thermal shrinkage, stress deformation, or water absorption. A study on dimensional changes in dentures fabricated with PMMA reported that the greatest dimensional changes occurred in the first month, with no significant changes occurring after two months . Therefore, in this study, to suppress dimensional deformation due to polymerization shrinkage and water absorption, the resin was kept in water for 1 month after fabrication to hydrate the resin and perform measurements on the OcA . Recently, 3D-printed resins have also been used as OcA materials. While no specific reports on OcA exist, studies have evaluated the dimensional stability of 3D-printed resins for dentures and crowns. One study reported an average deformation of 0.201 ± 0.055 mm in 3D-printed denture bases after 28 days of storage under dry conditions without direct sunlight . Another study evaluating water absorption and solubility in 3D-printed crowns found that 3D-printed PMMA resin exhibited higher water absorption than polycarbonate resin but lower than heat-polymerized PMMA resin . It has been suggested that 3D-printed materials are constructed in layers, allowing water to penetrate these layers and cause polymer chain movements, resulting in dimensional changes. The presence of free monomers in 3D-printed materials, owing to their low polymerization, has increased water absorption . The material used in this study absorbed more water than the 3D-printed resin. Therefore, the deformation effect due to drying was considered greater in the heat-polymerized PMMA resin than in the 3D-printed resin. Further investigation is required to evaluate the deformation of 3D-printed OcA over time under different storage conditions. Analysis methodology A spray-type coating was adopted as the optical impression-taking aid material for dental laboratory use because its measurement accuracy has previously been determined, and the thickness irregularity during coating is less for the spray type than for the powder type. However, because the spray type also has a 3 μm effect on the thickness of the measured object, it may be better to use a colored object to improve accuracy without the need to use a spray. To set the cutoff value, Izumi measured the OcAs multiple times before and after use and set the cutoff value at 30 μm from the 95% confidence interval of the measured values. 
In addition, Hirai measured OcAs twice in three dimensions and, based on the error between the two measurements, assumed that a deformation of 40 μm or less was within the error margin. In the present study, an optical impression-taking aid for dental laboratory use (scan spray) was applied, the mean value (accuracy) and standard deviation (precision) of the measured values were calculated using the accuracy-test procedure, and a cutoff value of 40 μm was set on the basis of these accuracy tests. Clinical applications The results of this study indicate that OcAs stored under wet conditions were less deformed than those stored under dry conditions, which is consistent with previous reports. In a previous study in which 3D measurements were performed on dentures fabricated with heat-cured resin (PMMA) under dry and wet conditions, the upper complete dentures showed significantly greater deformation under dry conditions than under wet conditions at 2 weeks. In lower complete dentures, deformation due to dry conditions was observed after 4 weeks of storage, but no significant difference in deformation between conditions was observed. This difference is suspected to arise because the lower complete denture has a larger surface area than the maxillary complete denture and therefore absorbs sufficient water before deflasking. Because both conditions in our study were kept in water for 1 month after deflasking, we think that is why significant differences were observed between the two conditions. In the present study, deformations of up to 0.069 mm in the − direction and 0.075 mm in the + direction were also observed in the 4-week underwater storage group after one month of hydration. Based on the diffusion coefficient of water, Braden calculated that a 2-mm-thick denture (heat-cured resin) would hydrate in 200 h when stored in water at body temperature (37.5°C) and would take three times longer to hydrate at room temperature (22.5°C) than at 37.5°C. In other words, at a thickness of 2 mm it is saturated after approximately 25 days of storage in water at room temperature. In the present study, the OcA had a thickness of approximately 3 mm in the occlusal area of the central incisor, and it is possible that hydration was insufficient in these areas. Based on the abovementioned findings, the hydration period may be affected by the thickness of the OcA and the storage temperature. Considering the effects of polymerization-induced shrinkage, water absorption, and OcA expansion, we emphasize that increased maintenance and recall are required, particularly during the early stages of use. Considering that the OcA is a periodontal-ligament-supported device that covers the dentition rather than a mucosa-supported device such as a denture, deformation is more easily perceived by the user, and discomfort is likely to occur. Previous studies reported a deformation of approximately 2.38 mm in the edentulous maxilla as a deformation of the masticatory mucosa and approximately 2 mm as an acceptable denture deformation. In contrast, Picton et al. reported that normal tooth movement during function is within approximately 100 μm in three dimensions; thus, OcA deformation in clinical practice should remain within this range. This study observed maximum deformations of 75 μm in the W group and 118 μm in the D group as ± direction deviations. Therefore, OcA deformation under dry conditions is considered unacceptable.
Deformation was observed under dry conditions in the posterior margin, palate, and buccal areas of the molars. These areas shrank toward the proximal region, and the degree of positive deformation was greater on the centrifugal palate side than on the central palate side. These results may serve as a basis for patient guidance materials explaining how to store OcAs. Patients could visually understand from these data that if they did not wear the OcA for a long period, they would become deformed and ill-fitted because of the storage conditions. In addition, when dentists adjust the OcA, it is possible to predict the deformity site and the adjustment required to correct the fit. Occlusal surfaces may require occlusal adjustment due to shrinkage toward the front and adjustment during difficult removal due to shrinkage in the form of lifting toward the center. The mechanism by which OcAs suppress SB is currently unknown; however, this suppression is short-lived. Previous studies reported three-dimensional (3D) analyses of OcA deformation caused by SB . This study investigated the deformation of OcA under different storage conditions, which may help in the analysis of SB-induced deformation of OcAs. Limitations This study had three limitations. First, to compare each condition using the best-fit alignment, wet and dry conditions were analyzed using the same test sample. However, the deformation under wet conditions may also be influenced by the carryover effect. Because the greatest dimensional changes due to water absorption and expansion occurred during the first month, with no significant changes observed after two months , a 1-month water absorption period was adopted in this study. Nevertheless, deformation was also observed under wet conditions, suggesting that different deformation patterns might have been observed if the same period had been evaluated under dry conditions. However, creating test samples with identical OcA morphologies under both dry and wet conditions is challenging. Therefore, this protocol enabled a corresponding group comparison using test samples with identical OcA morphologies under dry and wet conditions. Second, we adopted the best fit for the entire OcA as the designated range criterion for superimposition. Hirai obtained impressions of the OcA placed on a plaster model and performed 3D measurements. This was because the area of the model palate that was not deformed was used as the reference designation range. However, in this case, deformation may have occurred because of OcA mounting on the model. In this study, only OcA was measured, and a ceramic sphere was installed as a reference point. However, the ceramic sphere moved because of the OcA deformation. However, by focusing on this ceramic sphere, the deformation at the back and forth of the OcA became visually apparent. Previous studies evaluating the deformation of heat-cured resins employed linear or vertical cross-sectional analysis . Most previous studies have used optical microscopes or calipers to measure the distances between certain landmark points; however, these methods are limited in determining the overall deformation because the measurement of two points is simply a linear analysis. We believe that using the best-fit measurements in this study revealed three-dimensional changes. Finally, because OcA deformation is influenced by the material thickness, changes in thickness caused by occlusion or alignment may alter the deformation tendencies. 
For example, an increased angle of the lateral incisal path can thicken the OcA, leading to greater deformation in the thicker areas. Varying occlusal conditions, such as maxillary prognathism, mandibular prognathism, and open bite, affect the pairing of the upper and lower dentitions. These differences can result in variations in OcA thickness and the corresponding deformation tendencies.
OcAs made of heat-polymerized resin (PMMA) showed greater deformation in terms of surface deviation, deformed area, and deformed volume under dry conditions than under wet (underwater) conditions after 4 weeks of storage. Under dry conditions, deformation occurred such that the appliance shrank toward its center and the centrifugal portion was lifted toward the palate. S1 Table Summary of OcA deformation (raw data). (XLSX) |
Can we design the next generation of digital health communication programs by leveraging the power of artificial intelligence to segment target audiences, bolster impact and deliver differentiated services? A machine learning analysis of survey data from rural India | ce338d47-3d5f-4fb2-a35b-69ad60012d1c | 10030469 | Health Communication[mh] | Digital health solutions have the potential to address critical gaps in information access and service delivery, which underpin high mortality. Mobile health communication programmes, which provide information directly to beneficiaries, are among the few examples of digital health solutions to have scaled widely in a range of settings. Historically, these solutions have been designed as ‘blunt instruments’—providing the same content, with the same frequency, using the same digital channel to large target populations. While this approach has enabled solutions to scale, it has contributed to variability in their reach and impact, due in part to differences in women’s access to and use of mobile phones, particularly in low-income and middle-income countries. Despite near ubiquitous ownership of mobile phones at a household level, a growing body of evidence suggests that there is a substantial gap between men and women’s ownership, access to and use of mobile phones. In India, there is a 45% gap between women’s reported access to a phone and ownership at a household level. Variations in the size of the gap have been observed across states and urban/rural areas, and by sociodemographic characteristics, including education, caste and socioeconomic status. Among women with reported access to a mobile phone, the gender gap further persists in the use of mobiles, in part because of patriarchal gender norms and limited digital skills. Collectively, these gender gaps underscore the need to consider inequities in phone access and use patterns when designing and implementing direct to beneficiary (D2B) mobile health communication programmes. Kilkari, designed and scaled by BBC Media Action in collaboration with the Ministry of Health and Family Welfare, is India’s largest D2B mobile health information programme. When BBC Media Action transitioned Kilkari to the national government in April 2019, it had been implemented in 13 states and reached over 10 million women and their families. Evidence on the programme’s impact from a randomised control trial conducted in Madhya Pradesh, India, between 2018 and 2021, suggests that across study arms, Kilkari was associated with a 3.7% increase in modern reversible contraceptive use (RR: 1.12, 95% CI: 1.03 to 1.21, p=0.007), and a 2.0% decrease in the proportion of males or females sterilised since the birth of the child (RR: 0.85, 95% CI: 0.74 to 0.97, p=0.016). The programme’s impact on contraceptive use, however, varied across key population subgroups. Among women exposed to 50% or more of the Kilkari content as compared with those not exposed, differences in reversible method use were greatest for those in the poorest socioeconomic strata (15.8% higher), for those in disadvantaged castes (12.0% higher), and for those with any male child (9.9% higher). Kilkari’s overall and varied impact across beneficiary groups raises important questions about whether the differential targeting of women and their families might lead to efficiency gains and deepen impact. 
In this manuscript, we argue that to maximise reach and exposure and deepen impact, the future design of mobile health communication solutions will need to consider the heterogeneity of beneficiaries, including within husband–wife couples, and move away from a one-size-fits-all model towards differentiated programme design and delivery. Drawing from husbands' and wives' survey data captured as part of a randomised controlled trial (RCT) of Kilkari in Madhya Pradesh, India, we used a three-step process involving K-Means clustering and Least Absolute Shrinkage and Selection Operator (Lasso) regression to segment couples into distinct clusters. We then assessed differences in health behaviours across respondents in both study arms of the RCT. Findings are anticipated to inform future efforts to capture data and refine methods for segmenting beneficiary populations and in turn optimising the design and delivery of mobile health communication programmes in India and elsewhere globally. Kilkari program overview Kilkari is an outbound service that makes weekly, stage-based, prerecorded calls about reproductive, maternal, neonatal and child health (RMNCH) directly to families' mobile phones, starting from the second trimester of pregnancy until the child is 1 year old. Kilkari comprises 90 min of RMNCH content sent via 72 once-weekly voice calls (average call duration: 1 min, 15 s). Approximately 18% of cumulative call content is on family planning; 13% on child immunisation; 13% on nutrition; 12% on infant feeding; 10% on pregnancy care; 7% on entitlements; 7% on diarrhoea; 7% on postnatal care; and the remainder on a range of topics including intrapartum care, water and sanitation, and early childhood development. BBC Media Action designed and piloted Kilkari in the Indian state of Bihar in 2012–2013, and then redesigned and scaled it in collaboration with the Ministry of Health and Family Welfare between 2015 and 2019. Evidence on the evaluation design and programme impact is reported elsewhere. Setting Data used in this analysis were collected from four districts of the central Indian state of Madhya Pradesh as part of the impact evaluation of Kilkari described elsewhere. Madhya Pradesh (population 75 million) is home to an estimated 20% of India's population and falls below national averages for most sociodemographic and health indicators. Wide differences by gender and between urban and rural areas persist for a wide range of indicators, including literacy, phone access and health-seeking behaviours. Among men and women 15–49 years of age, 59% of women (78% urban and 51% rural) were literate as compared with 82% of men in 2015–2016. Among literate women, 23% had 10 or more years of schooling (44% urban and 14% rural). Despite near universal access to phones at a household level, only 19% of women in rural areas and 50% in urban areas had access to a phone that they themselves could use in 2015. Among pregnant women, over half (52%) received the recommended four antenatal care (ANC) visits in urban areas, as compared with only 30% in rural areas. Despite high rates of institutional delivery (94%) in urban areas, only 76% of women in rural areas reported delivering in a health facility in 2015. These disparities underscore the population heterogeneity within and across Madhya Pradesh.
Sample population The samples for this study were obtained through cross-sectional surveys administered between 2018 and 2020 to women (n=5095) with access to a mobile phone and their husbands (n=3842) in four districts of Madhya Pradesh. At the time of the first survey (2018–2019), the women were 4–7 months pregnant; the latter survey (2019–2020) reinterviewed the same women at 12 months post partum. Their husbands were only interviewed once, during the latter survey round. The surveys spanned 1.5 hours in length. In this analysis, modules on household assets and member characteristics and on phone access and use, including observed digital skills (navigate interactive voice response (IVR) prompts, give a missed call, store contacts on a phone, open SMS, read SMS), were used to develop the models. Data on practices for maternal and child health behaviours, including infant and young child feeding, family planning, and pregnancy and postpartum care, were used to explore the differential impact of Kilkari across clusters but were not used in the development of the clusters. Approach to segmentation presents a framework used for developing homogenous clusters of men and women in four districts of rural Madhya Pradesh, India. describes the steps undertaken at each point in the framework in detail. We started with data elements collected on phone access and use as well as population sociodemographic characteristics collected as part of a cross-sectional survey described elsewhere. Unsupervised learning was undertaken using K-Means clustering, and strong signals were identified. Strong signals were defined as variables that had a prevalence of at least 70% in one or more clusters and differed from another cluster by 50% or more. For example, 6% of men own a smart phone in Cluster 1, 88% in Cluster 2 and 75% in Cluster 3. Therefore, having a smart phone can be considered a strong signal. Additional details are summarised in . Once the clusters were defined, we then explored differences in healthcare practices across study clusters among those exposed and not exposed to Kilkari within each cluster. Box 1 Stepwise process for developing and refining a machine learning approach for population segmentation Data collected from special surveys like the couples' dataset used here are relatively small in terms of sample size but large with regard to the number of data elements available. In such high-dimensional data, there are many irrelevant dimensions which can mask existing clusters in noisy data, making the development of effective clustering methods more difficult. Several approaches have been proposed to address this problem. They can be grouped into two categories: static or adaptive dimensionality reduction, including principal components analysis, and subspace clustering, which consists of selecting a small number of original dimensions (features) in some unsupervised way or using expert knowledge so that clusters become more obvious in the subspace. In this study, we combined subspace clustering using expert knowledge and adaptive dimensionality reduction to find a subspace in which clusters are most clearly separated and well defined. Therefore, as part of subspace clustering, we chose to start with couples' survey data, including variables related to sociodemographic characteristics, phone ownership, use and literacy. The emergent clusters were overlapping. We therefore decided to use men's survey data on phone access and use as a starting point. Step 1.
Defining variables which characterise homogenous groups Analyses started with a predefined set of data elements captured as part of a men's cross-sectional survey, including sociodemographic characteristics and phone access and use. K-Means clustering was used to identify clusters, and the elbow method was used to define the optimal number of clusters. Strong signals were then identified: variables which had a prevalence of at least 70% in one or more clusters and differed from another cluster by 50% or more were considered to have a strong signal. Step 2. Model strengthening through the identification and addition of new variables Once an initial model was developed drawing from the predefined set of data from the men's survey and strong signals were identified, we reviewed the available data from the combined dataset (data from the men's survey and women's survey). Signal strength was used as an outcome variable, or target, in a linear regression with L1 regularisation, or Lasso regression (Least Absolute Shrinkage and Selection Operator); an illustrative sketch of this selection step is given below. Regularisation is a technique used in supervised learning to avoid overfitting. Lasso regression adds the absolute value (magnitude) of the coefficients as a penalty term to the loss function, which becomes \(Loss = Error(y, \hat{y}) + \alpha \sum_{i=1}^{N} |\omega_i|\), where the \(\omega_i\) are the coefficients of the linear regression \(y = \omega_1 x_1 + \omega_2 x_2 + \dots + \omega_N x_N + b\). Lasso regression works well for selecting features in very large datasets because it shrinks the coefficients of the less important features to 0. Merged women's survey and men's survey data were used as predictors for the regression, excluding variables related to health knowledge and practices. We ended up with a sample of 3484 rows and 1725 variables after data preprocessing. Step 3. Refining clusters using supervised learning We then reran K-Means clustering with three clusters (K=3) using the important features selected by Lasso regression. This methodology was used to refine the clusters and subsequently identify new strong signals. After step 3 was conducted, we repeated step 2, and kept iteratively repeating steps 2 and 3 until there was no further gain in strong signals. Data preparation and results formatting were conducted in R V.4.1.1, and K-Means clustering was performed in Python V.3.8.5. 10.1136/bmjopen-2022-063354.supp1 Supplementary data Patient and public involvement Patients were first engaged on identification in their households as part of a household listing carried out in mid/late 2018. Those meeting eligibility criteria were interviewed as part of the baseline survey, and ultimately randomised to the intervention and control arms. Prior to the administration of the baseline, a small number of patients were involved in the refinement of survey tools through qualitative interviews, including cognitive interviews, which were carried out to optimise survey questions, including the language and translation used. Finalised tools were administered to patients at baseline and endline, and for a subsample of the study population, additional interviews were carried out over the phone and via qualitative interviews between the baseline and endline surveys. Unfortunately, because of travel restrictions associated with COVID-19, findings were not disseminated back to community members.
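To make Steps 1–2 of Box 1 concrete, the sketch below flags "strong signal" variables from a set of cluster assignments and then uses Lasso to rank candidate variables from the merged couples' dataset against a signal-strength target. It is only an illustration: the variable names, the construction of the signal-strength target, and the regularisation strength alpha are assumptions, since the paper does not publish its exact feature matrix or Lasso settings.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def strong_signals(X_bin, labels, names, prevalence=0.70, gap=0.50):
    """Step 1: flag binary variables with >=70% prevalence in at least one
    cluster and a >=50 percentage-point difference from another cluster."""
    X_bin, labels = np.asarray(X_bin, dtype=float), np.asarray(labels)
    flagged = []
    for j, name in enumerate(names):
        by_cluster = np.array([X_bin[labels == k, j].mean() for k in np.unique(labels)])
        if by_cluster.max() >= prevalence and by_cluster.max() - by_cluster.min() >= gap:
            flagged.append(name)
    return flagged

def lasso_feature_ranking(X_all, signal_strength, names, alpha=0.01):
    """Step 2: regress a signal-strength target on the merged survey variables
    with L1 regularisation; variables whose coefficients shrink to 0 are dropped."""
    X_std = StandardScaler().fit_transform(np.asarray(X_all, dtype=float))
    model = Lasso(alpha=alpha, max_iter=10000).fit(X_std, signal_strength)
    kept = np.flatnonzero(model.coef_ != 0)
    ranked = kept[np.argsort(-np.abs(model.coef_[kept]))]
    return [(names[i], float(model.coef_[i])) for i in ranked]

# Hypothetical usage: X_bin holds binary survey indicators, labels the K-Means
# cluster of each couple, and signal_strength a numeric score built from the
# strong-signal variables.
# print(strong_signals(X_bin, labels, bin_names))
# print(lasso_feature_ranking(X_all, signal_strength, all_names)[:10])
```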
K-Means algorithm As part of steps 1 and 3, K-Means algorithms were used. We chose the K-Means algorithm because of its simplicity and speed in handling large datasets compared with hierarchical clustering. A K-Means algorithm is one method of cluster analysis designed to uncover natural groupings within a heterogeneous population by minimising the Euclidean distance between them. When using a K-Means algorithm, the first step is to choose the number of clusters K that will be generated. The algorithm starts by selecting K points randomly as the initial centres (also known as cluster means or centroids) and then iteratively assigns each observation to the nearest centre. Next, the algorithm computes the new mean value (centroid) of each cluster's new set of observations. K-Means reiterates this process, assigning observations to the nearest centre. This process repeats until a new iteration no longer reassigns any observations to a new cluster (convergence). Four metrics were used for the validation of clustering: the within-cluster sum of squares, the silhouette index, the Ray-Turi criterion and the Calinski-Harabasz criterion. The elbow method was used to find the right K (number of clusters). is a chart showing the within-cluster sum of squares (or inertia) by the number of groups (k value) chosen for several executions of the algorithm. Inertia is a metric that shows how dissimilar the members of a group are: the less inertia there is, the more similarity there is within a cluster (compactness). The main purpose of clustering is not to achieve 100% compactness; rather, it is to find a fair number of groups that satisfactorily explains a considerable part of the data (k=3 in this case). Silhouette analysis helped to evaluate the goodness of clustering (clustering validation). It can be used to study the separation distance between the resulting clusters. The silhouette plot displays a measure of how close each point in one cluster is to points in the neighbouring clusters. This measure has a range of [−1, 1]. Silhouette coefficients near +1 indicate that the sample is far from the neighbouring clusters. A value of 0 indicates that the sample is very close to the decision boundary between two neighbouring clusters, and negative values indicate that those samples might have been assigned to the wrong cluster. shows that choosing three clusters was more efficient than four for the data from the available surveys for two reasons: (1) there were fewer points with negative silhouettes and (2) the cluster size (thickness) was more uniform for three groupings. The other criteria used to evaluate the quality of clustering are obtained by combining a 'within-cluster compactness index' and a 'between-cluster spacing index'. The Calinski-Harabasz criterion is given by \(C(k) = \frac{\mathrm{Trace}(B)\,(n-k)}{\mathrm{Trace}(W)\,(k-1)}\) and the Ray-Turi criterion by \(r(k) = \frac{\mathrm{distance}(W)}{\mathrm{distance}(B)}\), where B is the between-cluster covariance matrix (so high values of B denote well-separated clusters) and W is the within-cluster covariance matrix (so low values of W correspond to compact clusters). Both criteria led to the same conclusion that three clusters were the best choice for the data we had. gives the different metrics used and the values obtained for the various clusters.
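For completeness, a compact sketch of this cluster-number selection (inertia for the elbow plot, silhouette, Calinski-Harabasz, and a simple Ray-Turi ratio) is shown below. It assumes a numeric feature matrix X has already been prepared from the survey data; the Ray-Turi implementation follows the within/between ratio described above rather than any specific package, so it is an approximation of the authors' computation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

def ray_turi(X, labels, centers):
    """Ray-Turi ratio: mean within-cluster squared distance divided by the
    minimum squared distance between cluster centres (lower is better)."""
    within = np.mean(np.sum((X - centers[labels]) ** 2, axis=1))
    between = [np.sum((a - b) ** 2) for i, a in enumerate(centers) for b in centers[i + 1:]]
    return within / min(between)

def evaluate_k(X, k_values=range(2, 9), seed=0):
    """Fit K-Means for each candidate K and report the validation metrics
    used in the study (inertia for the elbow method, silhouette, CH, Ray-Turi)."""
    X = np.asarray(X, dtype=float)
    rows = []
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        rows.append({
            "k": k,
            "inertia": km.inertia_,
            "silhouette": silhouette_score(X, km.labels_),
            "calinski_harabasz": calinski_harabasz_score(X, km.labels_),
            "ray_turi": ray_turi(X, km.labels_, km.cluster_centers_),
        })
    return rows

# Example with synthetic data standing in for the prepared survey feature matrix:
# X = np.random.default_rng(0).normal(size=(3484, 25))
# for row in evaluate_k(X):
#     print(row)
```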
Approximately 18% of cumulative call content is on family planning; 13% on child immunisation; 13% on nutrition; 12% on infant feeding; 10% on pregnancy care; 7% on entitlements; 7% on diarrhoea; 7% on postnatal care; and the remainder on a range of topics including intrapartum care, water and sanitation, and early childhood development. BBC Media Action designed and piloted Kilkari in the Indian state of Bihar in 2012–2013, and then redesigned and scaled it in collaboration with the Ministry of Health and Family Welfare between 2015 and 2019. Evidence on the evaluation design and programme impact are reported elsewhere. Data used in this analysis were collected from four districts of the central Indian state of Madhya Pradesh as part of the impact evaluation of Kilkari described elsewhere. Madhya Pradesh (population 75 million) is home to an estimated 20% of India’s population and falls below national averages for most sociodemographic and health indicators. Wide differences by gender and between urban and rural areas persist for wide range of indicators including literacy, phone access and health seeking behaviours. Among men and women 15–49 years of age, 59% of women (78% urban and 51% rural) were literate as compared with 82% of men in 2015–2016. Among literate women, 23% had 10 or more years of schooling (44% urban and 14% rural). Despite near universal access to phones at a household level, only 19% of women in rural areas and 50% in urban had access to a phone that they themselves could use in 2015. Among pregnant women, over half (52%) of pregnant women received the recommended four antenatal care (ANC) visits in urban areas as compared with only 30% in rural areas. Despite high rates of institutional delivery (94%) in urban areas, only 76% of women in rural areas reported delivering in a health facility in 2015. These disparities underscore the population heterogeneity within and across Madhya Pradesh. The samples for this study were obtained through cross-sectional surveys administered between 2018 and 2020 to women (n=5095) with access to a mobile phone and their husbands (n=3842) in four districts of Madhya Pradesh. At the time of the first survey (2018–2019), the women were 4–7 months pregnant; the latter survey (2019–2020) reinterviewed the same women at 12 months post partum. Their husbands were only interviewed once, during the latter survey round. The surveys spanned 1.5 hours in length. In this analysis, modules on household assets and member characteristics; phone access and use, including observed digital skills (navigate interactive voice response (IVR) prompts, give a missed call, store contacts on a phone, open SMS, read SMS) were used to develop models. Data on practice for maternal and child health behaviours, including infant and young child feeding, family planning, pregnancy and postpartum care were used to explore the differential impact of Kilkari across clusters but not used in the development of clusters. presents a framework used for developing homogenous clusters of men and women in four districts of rural Madhya Pradesh India. describes the steps undertaken at each point in the framework in detail. We started with data elements collected on phone access and use as well as population sociodemographic characteristics collected as part of a cross-sectional survey described elsewhere. Unsupervised learning was undertaken using K-Means cluster and strong signals were identified. 
Strong signals were defined as variables that had at least a prevalence of 70% in one or more clusters and differed from another cluster by 50% or more. For example, 6% of men own a smart phone in Cluster 1, 88% in Cluster 2 and 75% in Cluster 3. Therefore, having a smart phone can be considered as a strong signal. Additional details are summarised in . Once defined, we then explored differences in healthcare practices across study clusters among those exposed and not exposed to Kilkari within each cluster. Box 1 Stepwise process for developing and refining a machine learning approach for population segmentation Data collected from special surveys like the couple’s dataset used here are relatively smaller in terms of sample size but large with regard to the number of data elements available. In such high-dimensional data, there are many irrelevant dimensions which can mask existing clusters in noisy data, making more difficult the development of effective clustering methods. Several approaches have been proposed to address this problem. They can be grouped into two categories: static or adaptive dimensionality reduction , including principal components analysis and subspace clustering consisting on selecting a small number of original dimensions (features) in some unsupervised way or using expert knowledge so that clusters become more obvious in the subspace. In this study, we combined subspace clustering using expert knowledge and adaptive dimensionality reduction to find subspace where clusters are most well separated and well defined. Therefore, as part of subspace clustering, we chose to start with couples’ survey data, including variables related to sociodemographic characteristic, phone ownership, use and literacy . Emergent clusters were overlapping. We decided to use men’s survey data on phone access and use as a starting point. Step 1. Defining variables which characterise homogenous groups Analyses started with a predefined set of data elements captured as part of a men’s cross-sectional survey including sociodemographic characteristics and phone access and use. K-Means clustering was used to identify clusters and the elbow method was used to define the optimal number of clusters. Strong signals were then identified. Variables which had at least a prevalence of 70% in one or more clusters and differed from another cluster by 50% or more were considered to have a strong signal. Step 2. Model strengthen through the identification and addition of new variables Once an initial model was developed drawing from the predefined set of data from the men’s survey and strong signals were identified, we reviewed available data from the combined dataset (data from the men’s survey and women’s survey). Signal strength was used as an outcome variable or target in a linear regression with L1 regularisation or Lasso regression (Least Absolute Shrinkage and Selection Operator). Regularisation is a technique used in supervised learning to avoid overfitting. Lasso regression adds absolute value of magnitude of coefficient as penalty term to the loss function. The loss function becomes: L o s s = E r r o r ( y , y ) + α ∑ i = 1 N | ω i | where ω i are coefficients of linear regression y = ω 1 x 1 + ω 2 x 2 + … + ω N x N + b . Lasso regression works well for selecting features in very large datasets as it shrinks the less important features of coefficients to 0. 
Merged women’s survey and men’s survey data were used as predictors for the regression, excluding variables related to health knowledge and practices. We ended up with a sample of 3484 rows and 1725 variables after data preprocessing. Step 3. Refining clusters using supervised learning We then reran K-Means clustering with three clusters (K=3) using the important features selected by Lasso regression. This methodology was used to refine the clusters and subsequently identify new strong signals. After step 3 was conducted, we repeated step 2, and kept iteratively repeating steps 2 and 3 until there was no gain in strong signals. Data preparation and results formatting were conducted in R V.4.1.1, and K-Means clustering was performed in Python V.3.8.5. Patients were first engaged on identification in their households as part of a household listing carried out in mid-to-late 2018. Those meeting eligibility criteria were interviewed as part of the baseline survey, and ultimately randomised to the intervention and control arms. Prior to the administration of the baseline survey, a small number of patients were involved in the refinement of survey tools through qualitative interviews, including cognitive interviews, which were carried out to optimise survey questions, including the language and translation used. 
Finalised tools were administered to patients at baseline and endline, and, for a subsample of the study population, additional interviews were carried out over the phone and via qualitative interviews between the baseline and endline surveys. Unfortunately, because of travel restrictions associated with COVID-19, findings were not disseminated back to community members. As part of steps 1 and 3, K-Means algorithms were used. We chose the K-Means algorithm because of its simplicity and speed in handling large datasets compared with hierarchical clustering. A K-Means algorithm is one method of cluster analysis designed to uncover natural groupings within a heterogeneous population by minimising the Euclidean distance between observations and their cluster centres. When using a K-Means algorithm, the first step is to choose the number of clusters K that will be generated. The algorithm starts by selecting K points randomly as the initial centres (also known as cluster means or centroids) and then iteratively assigns each observation to the nearest centre. Next, the algorithm computes the new mean value (centroid) of each cluster’s new set of observations. K-Means reiterates this process, reassigning observations to the nearest centre. This process repeats until a new iteration no longer reassigns any observations to a new cluster (convergence). Four metrics were used for the validation of clustering: within-cluster sum of squares, silhouette index, Ray-Turi criterion and Calinski-Harabasz criterion. The elbow method was used to find the right K (number of clusters), based on a chart showing the within-cluster sum of squares (or inertia) by the number of groups (k value) chosen for several executions of the algorithm. Inertia is a metric that shows how dissimilar the members of a group are: the less inertia there is, the more similarity there is within a cluster (compactness). The main purpose of clustering is not to find 100% compactness; it is rather to find a fair number of groups that satisfactorily explain a considerable part of the data (k=3 in this case). Silhouette analysis helped to evaluate the goodness of the clustering, or clustering validation. It can be used to study the separation distance between the resulting clusters. The silhouette plot displays a measure of how close each point in one cluster is to points in the neighbouring clusters. This measure has a range of [−1, 1]. Silhouette coefficients near +1 indicate that the sample is far from the neighbouring clusters. A value of 0 indicates that the sample is very close to the decision boundary between two neighbouring clusters, and negative values indicate that those samples might have been assigned to the wrong cluster. Silhouette analysis showed that choosing three clusters was more efficient than four for the data from the available surveys, for two reasons: (1) there were fewer points with negative silhouettes and (2) the cluster size (thickness) was more uniform for three groupings. Other criteria used to evaluate the quality of clustering are obtained by combining a ‘within-cluster compactness index’ and a ‘between-cluster spacing index’. The Calinski-Harabasz criterion is given by $C(k) = \frac{\mathrm{Trace}(B)\,(n-k)}{\mathrm{Trace}(W)\,(k-1)}$ and the Ray-Turi criterion is given by $r(k) = \frac{\mathrm{distance}(W)}{\mathrm{distance}(B)}$, where B is the between-cluster covariance matrix (so high values of B denote well-separated clusters) and W is the within-cluster covariance matrix (so low values of W correspond to compact clusters). 
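For illustration, these validation checks can be computed with scikit-learn as in the minimal sketch below; the data are simulated, the Ray-Turi criterion is omitted (it has no scikit-learn implementation), and the code is not the analysis actually run for the study.

```python
# Illustrative K-Means validation on simulated data: inertia (for the elbow
# method), silhouette score and Calinski-Harabasz score across candidate K.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

X, _ = make_blobs(n_samples=3484, centers=3, n_features=10, random_state=42)

for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(
        f"K={k}: inertia={km.inertia_:.0f}, "
        f"silhouette={silhouette_score(X, km.labels_):.3f}, "
        f"Calinski-Harabasz={calinski_harabasz_score(X, km.labels_):.0f}"
    )
# The elbow in inertia, together with the silhouette and Calinski-Harabasz
# profiles, is inspected to choose K (K=3 in this analysis).
```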
Both criteria led to the same conclusion that three clusters were the best choice for the data we had. The metrics used, and the values obtained for the various cluster solutions, are summarised in the accompanying table. Sample characteristics The accompanying tables summarise the sample characteristics by cluster for the men and women interviewed and present select characteristics with ‘strong signals’ for each cluster. Cluster 1 (n=1408) constitutes 40% of the sample population and comprised men and women with low levels of digital access and skills. This cluster included the poorest segment of the sample population: 36% had a primary school or lower education and 40% were from a scheduled tribe/caste. Most men owned a feature (68%) or brick phone (22%); used the phone daily (89%); and, while able to navigate IVR prompts (91%), only 29% were able to perform all of the five basic digital skills assessed. Women in this cluster similarly had lower levels of education as compared with other clusters (39% had primary school or less education); used feature (74%) or brick phones (8%); and had low digital skills (15% were able to perform the five basic digital skills assessed). Cluster 2 (n=666; 19% of the sample population) comprises men with mid-level, and women with low, digital access and skills. In this cluster, 75% of men owned smartphones, 65% were observed to successfully perform the five basic digital skills assessed and 36% could perform a basic internet search. Men in Cluster 2 also self-reported accessing videos from YouTube (84%) and using WhatsApp (95%). Women in Cluster 2 had low phone ownership; nearly half of women reported owning a phone (38% owned a phone and did not share it, 22% owned and shared a phone), findings which contradict their husbands’ reports of 0% women’s phone ownership. Only 21% of women in this cluster were observed to be able to successfully perform the five basic digital skills assessed. However, based on husbands’ reporting of their wives’ digital skills, 36% of women could search the internet, 37% used WhatsApp, and 66% watched shows on someone else’s phone. Cluster 3 (n=1410; 40% of the sample population) comprises couples with high-level digital access among both husbands and wives, and lower-level digital skills among wives. An estimated 67% of couples in this cluster were in the richer or richest socioeconomic strata, while 71% of men and 58% of women had high school or higher levels of education. Men in this cluster reported using the internet frequently (85%), were observed to own smartphones (88%) and had high levels of digital skills: 77% could perform the five basic digital skills assessed, 77% could perform a basic internet search and 85% could send a WhatsApp message. When reporting on their wives’ digital access and skills, all men in this cluster reported that their wives owned phones (100%), but often shared these phones with their husbands (77%), using them to watch shows (75%), search the internet (55%) or use WhatsApp (57%). However, a much lower proportion of women interviewed in this cluster were observed to own feature (57%) or smartphones (34%), and they had moderate digital skills, with 41% being able to successfully perform the five basic digital skills assessed. Differences in health outcomes by cluster The accompanying table presents differences in health outcomes by cluster among those exposed and not exposed to Kilkari as part of the RCT in Madhya Pradesh. 
Findings suggest that the greatest impact was observed among those exposed to Kilkari in Cluster 2, the smallest cluster identified (19% of the sample population). Among this population, differences between exposed and not exposed were 8% for reversible modern contraceptive methods, 7% for immunisation at 10 weeks, 3% for immunisation at 9 months, and 4% for timely immunisation at 10 weeks and 9 months. Additionally, an 8% difference between exposed and not exposed was observed for the proportion of women who reported being involved in the decision about what complementary foods to give the child. Among Clusters 1 and 3, improvements were observed among those exposed to Kilkari for a small number of outcomes. In Cluster 1, those exposed to Kilkari had a 3%–4% higher rate of immunisation at 6, 10 and 14 weeks than those not exposed. In both Clusters 1 and 3, the timeliness of immunisation at 10 weeks improved among those exposed. No improvements were observed for use of modern reversible contraception in either cluster. 
Evidence on the impact of D2B mobile health communication programmes is limited but broadly suggests that they can cost-effectively improve some reproductive, maternal and child health practices. This analysis aims to serve as a proof of concept for segmenting beneficiary populations to support the design of more targeted mobile health communication programmes. We used a three-step iterative process involving a combination of supervised and unsupervised learning (K-Means clustering and Lasso regression) to segment couples into distinct clusters. Three identifiable groups emerged, each with differing health behaviours. Findings suggest that exposure to the D2B programme Kilkari may have a differential impact among the clusters. Implications for designing future digital solutions Findings demonstrate that the impact of the D2B solution Kilkari varied across homogenous clusters of women with access to mobile phones and their husbands in Madhya Pradesh. Across delivery channels, our analysis indicates that mobile health communication could not be effectively delivered to husbands and wives in Cluster 1 using WhatsApp, because smartphone ownership and WhatsApp use in this cluster are negligible. IVR, on the other hand, could be used to reach couples in Cluster 1, but reach is likely to be sporadic because of high levels of phone sharing with others (78% among men and 57% among women). In contrast, WhatsApp and YouTube are likely to be effective digital channels for communicating with both husbands and wives in Cluster 3, where most men and women own or use smartphones and WhatsApp. Beyond delivery channels, study findings raise a number of important learnings for content development as well as for optimising beneficiary reach and exposure. 
The creative approach to content for Cluster 3, where 40% of women are from the richest socioeconomic stratum and only 17% have never been to school or have a primary school education or less, would need to be very different from the creative approach to content for Cluster 1, where 53% are in the poorest or poorer socioeconomic strata and 39% have never been to school or have a primary school education or less. Similarly, this analysis adds to qualitative findings and provides important insights into how gender norms related to women’s use of mobile phones may affect reach and impact. While few (13–15%) husbands indicated that ‘adults’ need oversight to use mobile phones, men’s perceptions varied when asked about specific use cases. Across all clusters, nearly half of husbands indicated that their wives needed permission to pick up phone calls from unknown numbers, an important insight for IVR programmes, which may make outbound calls to beneficiaries without prior warning. In Clusters 1 and 2, 25% and 29% of husbands, respectively, reported that their wives need permission to answer calls from health workers, as compared with 15% in Cluster 3. While restrictions on SMS and WhatsApp were lower than for making or receiving calls, these channels are less viable given women’s limited access to smartphones, low literacy and low digital skills. Overall, men’s perceptions of the restrictions needed on the receipt and placement of calls by women were lower for Cluster 3. However, despite the relative wealth of beneficiaries in Cluster 3 (67% were in the richer or richest socioeconomic strata), 48% of women had zero balance on their mobile phones at the time of interview. Collectively, these findings highlight the immense challenges which underpin efforts to facilitate women’s phone access and use. They also underline the criticality of designing mobile health communication content for couples, rather than just wives, to ensure the buy-in of male gatekeepers, and of continuing to prioritise face-to-face communication with women on critical health issues. Approach to segmentation Data in our sample were captured as part of special surveys carried out through the impact evaluation of Kilkari. Future programmes may be tempted to apply the approach undertaken here to existing datasets, including routine health information systems or other forms of government tracking data. In the Indian context, while these data are likely to be less costly than special surveys, they are comparatively limited in terms of the data elements captured, particularly data on ownership of different types of mobile devices, digital skill levels and usage of specific applications or social media platforms. Data quality may also be a significant issue in existing datasets. For example, we estimate that SIM change in our study population was 44% over a 12-month period, a factor which, when coupled with the absence of systems to update government tracking registries, raises important questions about who is retained in these databases, and therefore able to receive mobile health communications, and who is missing. Among the variables used, men’s phone access and use were most integral to developing distinct clusters. We recommend that future surveys seeking to generate data for designing digital services for women ensure that data elements are captured on men’s phone access and use practices as well as their perceptions of their wives’ phone access and use. 
In addition to the underlying data, our analytic approach differed from other segmentation analyses. Our work is relatively new in the global health literature related to digital health programmes that are positioned as D2B programmes. While similar ML models are being tested in various domains related to public health, they consist exclusively of unsupervised learning or supervised learning; this analysis is the first of its kind to focus on the use of a combination of supervised and unsupervised learning to identify homogenous clusters for the targeting of digital health programmes. Data collected from special surveys like the couples’ dataset used here are comparatively small in terms of sample size but large with regard to the number of data elements available. An alternative approach to that described in this manuscript might be to develop strata based on population characteristics. Indeed, findings from the impact evaluation published elsewhere suggest that women with access to phones in the most disadvantaged sociodemographic strata (poorest (15.8% higher) and disadvantaged castes (12% higher)) showed greater impact when exposed to 50% or more of the Kilkari content as compared with those not exposed. With an approach to segmentation based on these strata of highest impact, we know and understand what divides or groups respondents (eg, socioeconomic status, education), but this may not be enough when those characteristics do not explain the underlying reasons for change. In the approach used here, the study population is segmented using multiple characteristics (sociodemographic, digital access and use) simultaneously. The result is clusters comprised of individuals with mixed sociodemographic characteristics, which may help to explain the reduced impact observed on health outcomes. Designing a strategy based on previously known/identifiable strata alone has been the basis of targeting in public health but has not maximised reach, exposure and effect to their fullest potential. The approach used here may better group beneficiaries based on their digital access and use characteristics, which may serve to increase reach and exposure. However, further research is needed to determine how to deepen impact within these digital clusters. 
Study findings sought to identify distinct clusters of husbands and wives based on their sociodemographic and phone access and use characteristics, and to explore the differential impact of a maternal mobile messaging programme across these clusters. Three identifiable groups emerged, each with differing levels of digital access and use. Descriptive analyses suggest that improvements in some health behaviours were observed for a greater number of outcomes in Cluster 2 than in Clusters 1 and 3. These findings suggest that one-size-fits-all mobile health communication solutions may only engage one segment of a target beneficiary population, and they offer much promise for future D2B and other digital health programmes, which could see greater reach, exposure and impact through differentiated design and implementation. More quantitative and qualitative work is needed to better understand the factors driving the differences in impact and what is likely to motivate adoption of target behaviours in different clusters. Our work opens up a new avenue of research into better targeting of beneficiaries using data on a variety of domains, including sociodemographics and mobile phone access and use. Future work will entail evaluation of the actual platform used for targeting and delivery of the programme in pilot projects. 
Successful pilots can be scaled up to larger swathes of the population in India and similar settings around the world.
Proteomic profiling of human plasma and intervertebral disc tissue reveals matrisomal, but not plasma, biomarkers of disc degeneration | e677b78d-87ef-4690-afbd-4c5a87a2a4f6 | 11809052 | Anatomy[mh] | The intervertebral disc (IVD) is a fibrocartilaginous tissue that sits in-between adjacent vertebrae and is comprised of a central gel-like nucleus pulposus (NP) enclosed within an outer thick ring of annulus fibrosus (AF). The IVD contains a heterogeneous, integrated network of extracellular matrix (ECM) and cells that support IVD structure and function, thereby providing mechanical support and flexibility to the spine . Over time, the IVD’s structure and function gradually deteriorate due to ageing, injury, or repetitive mechanical stress. IVD degeneration results in loss of NP hydration, ECM degradation, and neovascularisation and neoinnervation, (which ultimately extends through the AF into the NP). Altogether, these pathophysiological changes result in loss of disc height, bulging and often compression of nerve roots, which contributes to the pathogenesis of low back pain (LBP), a debilitating condition affecting millions of individuals globally each year . IVD degeneration is primarily diagnosed through a combination of physical examinations and imaging techniques. Magnetic resonance imaging (MRI) is the most commonly used technique to visualise and monitor the extent of degeneration through identification of gross features such as loss of disc height, bulging and loss of water content . However, MRI techniques only provide a macroscopic view of the IVD and lack detail of changes to the microscopic and molecular structure of the tissue such as alterations in cell density and ECM that are observed in degenerated IVD histologically . This means that small changes to the structure of the tissue as degeneration worsens may not be detected through MRI scans, and as such imaging techniques have limited capacity to detect or monitor ongoing degenerative changes. Thus, there is a need for improved techniques to monitor progression of IVD degeneration to enable tailored and more effective treatment plans. A combination of tissue and blood biomarkers may offer a better alternative for monitoring degenerative changes over time. For example, in cancer studies, the use of combined serum or plasma and tissue biomarkers was shown to improve early detection as well as the sensitivity and specificity of diagnostic tests . However, identifying serum/plasma and tissue biomarkers for IVD degeneration has remained challenging due to the heterogeneous nature of the disease . Past studies have demonstrated changes in inflammatory proteins, such as IL-6, CCL5 and TNFA, in serum from patients with degenerated IVD tissue compared to non-degenerated IVD tissue . Similar changes in CCL5 and TNFA levels were reported in degenerate IVD tissue, at the gene expression level, in degenerated IVD in comparison to non-degenerate IVD . However, these studies were independent and there have been no attempts to stratify changes seen in the IVD tissue during degeneration with those observed in serum or plasma . Furthermore, with more focus given to comparing non-degenerate and degenerate IVD tissues, limited work has been undertaken to understand whether alterations in IVD tissue protein composition, particularly the ECM, are exacerbated as degeneration worsens and how these changes may affect or reflect blood protein composition. 
The IVD’s ECM comprises a complex network of matrisome and matrisome-associated components that interact to provide mechanical support. The central NP is mainly composed of proteoglycans, primarily aggrecan, that maintain hydration and osmotic pressure, which helps the disc resist compressive forces while maintaining flexibility. The AF predominantly consists of collagens, primarily type I, that provide tensile strength and resistance to shear forces thereby maintaining the disc’s structural integrity. IVD cells constantly remodel their local ECM environment, maintaining a balance between synthesis and degradation, a process driven by proteases including matrix metalloproteinases (MMPs) and a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTSs) . During degeneration, this balance is disrupted resulting in increased ECM degradation and functional impairment. IVD degeneration is irreversible and worsens over time; however, ECM changes that drive degeneration progression have not been fully elucidated. In this study, we characterised changes in ECM protein composition of tissues from patients with mild and severe IVD degeneration to understand how degeneration progression affects the ECM environment in the disc. We also examined the protein composition of matched plasma samples to identify any changes that correlated with those observed in the severe and mild degenerated IVD tissues. We propose that a combination of specific tissue and plasma biomarkers is essential for early diagnosis, monitoring, and personalisation of the treatment for IVD degeneration. Human intervertebral disc tissues Human IVD tissues were obtained from individuals undergoing lumbar discectomy surgery to treat degenerative disc disease. Full written informed consent was provided by all donors before tissue collection. This study was reviewed and approved by the National Research Ethics Service (17/LO/1408). All experiments were conducted in compliance with the committee’s ethical standards and guidelines. IVD tissues were collected from 18 males and 17 females with a mean age of 41.3 ± 10.4 years (Table ). Excised tissues were stored in high-glucose Dulbecco’s modified eagle medium (DMEM, Sigma Aldrich ) following surgery and processed the same day. Whole blood was also collected from these donors to obtain plasma. Tissue processing and histology IVD tissue samples were fixed in 10% neutral buffered formalin ( Sigma-Aldrich ) for 20–24 h and embedded in paraffin blocks. Sections (5 μm) were cut, placed on glass slides, deparaffinised through xylene, and rehydrated in decreasing concentrations of ethanol. Sections were stained with Mayer’s haematoxylin ( Solmedia Laboratory Suppliers ) for 2 min and washed with running tap water for 5 min. Sections were then counterstained with an eosin-Y alcoholic solution with phloxine ( Sigma-Aldrich ) for 10 s, dehydrated, cleared, and then mounted with coverslips. IVD tissues were histologically graded (ranging from 0 to 12) by an experienced histopathologist using the published scoring system described by Sive et al. . The 35 IVD tissues used in this study were classified into mild degenerate (Grades 4–7, n = 18) and severe degenerate (Grades 10–12, n = 17) (Table ). No tissues were classed as non-degenerate (Grades 0–3) as all samples were obtained from individuals undergoing treatment for disc degenerative disease diagnosed through MRI imaging. 
Tissue mass spectrometry: protein extraction and liquid chromatography-tandem mass spectrometry (LC-MS-MS) To extract protein for mass spectrometry, 25 μl of solubilisation buffer (5% (w/v) SDS in 50mM triethylammonium bicarbonate (TEAB), pH 7.55) was added to deparaffinised, rehydrated IVD tissue sections (2 × 5 μm per tube) and incubated at 95 °C for 20 min followed by 60 °C for 2 h in a thermomixer set to 1400RPM. After cooling to room temperature (RT), 75 μl of 5% (w/v) SDS, 10 M Urea, 50mM TEAB, and 13.33mM DTT (pH 7.55) were added to the samples. All samples were transferred to Covaris tubes and sonicated using a focused ultrasonicator (LE220-plus, Covaris) at 8 W for 20 min (sonicated for 300 s, peak power = 180, average power = 72, duty factor 40%, cycles per burst = 200, delay 15s, then repeated once). Reduced disulfide bridges were alkylated by adding 8 μl of 20mM iodoacetamide and incubating at RT in the dark for 30 min. Lysates were acidified with 12 μl of 12% (v/v) phosphoric acid. Acidified lysates were then centrifuged at 12000xg for 5 min and the supernatant was collected into a clean tube prior to proteolytic digestions using suspension trapping (S-Trap). 600 μl of S-Trap binding buffer (90% (v/v) methanol, 100 mm TEAB, pH 7.1) was added to each lysate, and the total volume was transferred to a micro S-Trap spin column ( Profiti ). Samples were washed ten times with 150 μl S-Trap binding buffer. After washing, 2 μg of trypsin ( Promega ) diluted in 25 μl TEAB (pH 8) was added to the S-Trap and columns were incubated at 47 °C for 1 h. Digested peptides were eluted by the addition of 40 μl 50mM TEAB (pH 8) followed by 40 μl of 0.2% (v/v) formic acid. Hydrophobic peptides were eluted by 40 μl of 30% (v/v) acetonitrile, 0.2% (v/v) formic acid and the total volume (120 μl) was collected and lyophilised in a SpeedVac. Peptides were resuspended in 100 μl of 3% (v/v) acetonitrile, 0.1% (v/v) formic acid and desalted using POROS Oligo R3 beads ( Thermo Fisher Scientific ) in 0.2 μm polyvinylidene fluoride filter plates ( Corning ). Briefly, peptides were mixed with R3 beads for 5 min at 800RPM using a thermomixer. Samples were then washed 10 times with 100 μl 0.1% formic acid for 2 min. Peptides were eluted in 50 μl of 30% acetonitrile. Elution was repeated to obtain a total volume of 100 μl. Samples were lyophilised and stored at 4 °C. For liquid chromatography with tandem mass spectrometry (LC/MS/MS), peptides were resuspended in 10 μl of 5% (v/v) acetonitrile, 0.1% (v/v) formic acid and analysed using an UltiMate 3000 RSLC ( Dionex Corporation ) coupled to a Q Exactive HF Orbitrap ( Thermo Fisher Scientific ) mass spectrometer. Peptide mixtures were separated using a multistep gradient from 95% A (0.1% (v/v) formic acid in water) and 5% B (0.1% (v/v) formic acid in acetonitrile) to 7% B at 1 min, 18% B at 58 min, 27% B in 72 min and 60% B at 74 min at 300nL/min, using a 75 mm × 250 μm inner diameter 1.7 μM CSH C18, analytical column (Waters). Peptides were selected for fragmentation automatically by data-dependent analysis. Mass spectrometers were operated using Xcalibur software (version 4.1.31.9, Thermo Scientific ). Tissue mass spectrometry: data analysis Raw IVD tissue mass spectrometry data files (.raw) were imported into MaxQuant v2.3.1 for analysis . Raw spectra were searched against a reviewed human protein database (UniProt UP000005640, 20,420 entries, July 2024) in Andromeda, MaxQuant’s built-in database search engine. 
Label-free quantification (LFQ) was activated to quantify protein abundance across samples. Normalisation of intensities across samples was performed using the MaxLFQ algorithm within the MaxQuant software as described in Cox et al. Trypsin was selected as the digestion enzyme allowing a maximum of two missed cleavage sites. Oxidation of methionine and N-terminal acetylations were set as variable modifications, while carbamidomethylation of cysteine was set as a fixed modification allowing a maximum of five modifications per peptide. The first peptide search mass tolerance was set to 20ppm and the main search mass tolerance was set to 4.5ppm. Match between runs was disabled and the false discovery rate was set to 1% for peptides and proteins. All other parameters were left unchanged. MaxQuant output files were examined and the proteinGroups.txt file was used for downstream analysis. LFQ intensities were imported into Perseus 2.0.11 for further analysis. Data were filtered to remove potential contaminants, proteins only identified by modification site, and reverse hits. Data were then filtered so that proteins with valid LFQ intensities in 50% of all samples were kept, leaving a total of 119 proteins. Protein LFQ intensities for the 119 proteins were log2 transformed, and imputation was performed by replacing missing values with random numbers drawn from a normal distribution (width = 0.3, downshift = 1.8). Differential expression analysis between severe and mild degenerate IVD tissues was performed using a two-sample t-test and a permutation-based false discovery rate method was used for multiple comparisons. Data matrices were exported from Perseus for further analysis in R v4.2.2 and RStudio. Principal component analysis was performed using the ‘prcomp’ function in R and visualised using the ‘autoplot’ function in ggplot2. Differentially expressed proteins were visualised using the EnhancedVolcano package. KEGG enrichment pathway and gene ontology analyses were performed using Enrichr. Gene set enrichment analysis was performed using clusterProfiler and visualised using enrichplot. Matrisome and non-matrisome proteins were categorised using the Naba et al. human matrisome database. Plasma mass spectrometry: protein extraction and SWATH-MS To obtain plasma, whole blood was collected from the same donors as tissues at time of surgery. Whole blood was drawn into 9 ml red cap S-monovettes containing ethylenediaminetetraacetate tripotassium (K3-EDTA, Sarstedt). Within 30 min of collection, samples were centrifuged at 1500xg for 15 min at 4 °C to remove blood cells. Plasma supernatant was transferred to clean centrifuge tubes and spun at 2000xg for 14 min at 4 °C to remove any remaining cells. Plasma was then transferred to clean cryovials and frozen until use. For SWATH mass spectrometry (MS) analyses, 10 μl of plasma was immunodepleted using Pierce Top 12 Abundant Protein Depletion Spin Columns (Thermo Scientific) following the manufacturer’s instructions. Depleted plasma was then concentrated, and buffer exchange was performed using Amicon Ultra-0.5 Centrifugal Filter Devices (Merck-Millipore). Total protein concentration was determined using a BCA protein assay kit (Thermo Fisher). The depleted plasma (containing 40 μg of protein) was denatured, reduced and alkylated in 25mM ammonium bicarbonate containing 5mM dithiothreitol (GE Healthcare), 50 mM iodoacetamide (Sigma Aldrich) and 1% sodium deoxycholate (Sigma Aldrich). 
Modified sequencing-grade trypsin (Promega) was added at a ratio of 10:1 substrate:enzyme and digestion was performed overnight at 37 °C. Digests were subsequently dried in a Genevac™ vacuum centrifuge (Thermo Fisher Scientific). Samples were reconstituted in loading buffer containing 2% (v/v) acetonitrile (Thermo Fisher Scientific), 0.1% (v/v) formic acid (Thermo Fisher Scientific), 100 fmol/μl PepCalMix (MS Synthetic Peptide Calibration Kit, AB Sciex UK Ltd) and 10× index retention time (iRT) standards (Biognosys AG, Switzerland). Samples were analysed by SWATH-MS with a micro-flow LC-MS system comprising an Eksigent nanoLC 400 autosampler and an Eksigent nanoLC 425 pump coupled to an AB Sciex 6600 Triple-TOF mass spectrometer with a DuoSpray Ion Source. Liquid chromatography gradient details and MS settings were as described by McGurk et al. Plasma mass spectrometry: data analysis Raw (.wiff) files were processed using DIANN software v1.8.1. First, for library-free search, an in silico-predicted spectral library was generated using the UniProt human proteome sequence database (UniProt Proteome ID: UP000005640, count: 20,654, July 2024). The generated spectral library was reuploaded into DIANN, and raw data were analysed using the robust LC (high precision) quantification strategy. Cross-run normalisation was set to ‘retention time-dependent’, and match between runs was disabled. The precursor false-discovery rate (FDR) was set to 1%. Recommended or default settings were used for all other parameters. The protein group output matrix and associated experiment annotation files were imported into FragPipe-Analyst software for further analysis. Before differential expression analysis, the minimum percentage of non-missing values was set to 50% for all samples. No further normalisation was performed at this stage. Perseus-type imputation was performed for the remaining 277 proteins as described above. Differential expression was performed using Limma and the Benjamini-Hochberg correction was used for multiple comparisons. Data matrices were exported and visualised in R as described above. Immunohistochemistry For immunohistochemistry, slides were deparaffinised and rehydrated as described above. Antigen retrieval was performed using citrate buffer pH 6 for 20 min at 95 °C. Slides were allowed to cool to RT and endogenous peroxidase was blocked using 3% (v/v) hydrogen peroxide in industrial methylated spirit (IMS). Tissues were washed with tris-buffered saline (TBS) and non-specific binding was blocked using 25% (v/v) normal goat serum and 1% (w/v) bovine serum albumin (BSA) in TBS for 30 min at RT. Tissues were then incubated with 100 μl of rabbit anti-human AEBP1 primary antibody (1/150, Abcam AB254973) overnight at 4 °C. Slides were washed in TBS-Tween and incubated with 100 μl of secondary goat anti-rabbit antibody in TBS (1/300) for 30 min at RT. Signal amplification was achieved by applying avidin/biotin complex solution (Vector Laboratories) to sections for 30 min at RT. Sections were rinsed in TBS and 3,3’-diaminobenzidine tetrahydrochloride (DAB) was used for signal detection. Excess DAB was tapped off and the sections were rinsed in deionised water. The slides were then counterstained using Mayer’s haematoxylin (5 min) and rinsed in tap water before they were dehydrated, cleared, and mounted with coverslips. Slides were imaged using an automated 3D Histech Pannoramic P250 slide scanner and processed using SlideViewer software (3DHISTECH). 
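As a brief aside on the quantitative analyses above, the sketch below illustrates one way the 'Perseus-type' imputation used for both the tissue and plasma datasets, replacing missing log2 intensities with draws from a normal distribution narrowed to width 0.3 and downshifted by 1.8 standard deviations per sample, could be implemented. It is a simplified illustration assuming numpy and pandas, not the actual Perseus or FragPipe-Analyst code.

```python
# Simplified per-sample (per-column) imputation of missing log2 LFQ intensities
# from a narrowed, downshifted normal distribution (width=0.3, downshift=1.8).
import numpy as np
import pandas as pd

def impute_downshifted_normal(log2_df: pd.DataFrame, width: float = 0.3,
                              downshift: float = 1.8, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    imputed = log2_df.copy()
    for col in imputed.columns:
        observed = imputed[col].dropna()
        mu, sd = observed.mean(), observed.std()
        missing = imputed[col].isna()
        # Draw replacements from N(mu - downshift*sd, (width*sd)^2).
        imputed.loc[missing, col] = rng.normal(mu - downshift * sd,
                                               width * sd, size=missing.sum())
    return imputed
```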
Intervertebral disc primary cell extraction and culture Fresh IVD tissue was finely chopped and placed into a 50 ml tube containing 10 ml serum-free DMEM supplemented with 0.1% (w/v) collagenase type II (Gibco, 17101015) and 2% (v/v) antibiotic-antimycotic solution (Sigma Aldrich). Tissues were incubated overnight at 37 °C. Cells were passed through a 70 μm cell strainer and centrifuged at 400xg for 5 min. The cell pellet was resuspended in 10 ml serum-free medium and centrifuged at 400xg for 5 min. Cells were resuspended in 5 ml complete disc cell media (high-glucose DMEM supplemented with 1mM sodium pyruvate, 10 μM L-ascorbic acid 2-phosphate sesquimagnesium salt hydrate, 1% antibiotic-antimycotic solution, and 10% fetal calf serum) and dispensed into T25 cell culture flasks. Cells were cultured at 37 °C and 5% CO2 until they reached 80% confluency (P0). RNA extraction, cDNA conversion, and quantitative reverse transcription polymerase chain reaction (qRT-PCR) IVD primary cells were trypsinised and lysed using 1 ml TRI Reagent (Sigma Aldrich). Lysates were incubated at RT for 5 min. Chloroform (200 μl) was added and each sample was shaken vigorously before centrifugation at 12000xg for 20 min at 4 °C. 250 μl of the aqueous phase was transferred into a new tube containing 250 μl isopropanol. Samples were incubated at RT for 10 min and then centrifuged at 12000xg for 20 min at 4 °C to precipitate RNA. The supernatant was discarded, and RNA pellets were washed twice with ice-cold 70% ethanol and centrifuged at 8000xg for 5 min at 4 °C. RNA pellets were dried for 10 min at RT and eluted in 50 μl 1X Tris-EDTA solution. RNA concentration was quantified using a Nanodrop 1000 spectrophotometer and associated software (Thermo Fisher Scientific). RNA (1 μg) from each sample was converted to cDNA using the High-Capacity RNA-to-cDNA Kit (Applied Biosystems) according to the manufacturer’s instructions. cDNA was diluted to 5 ng/μl and stored at -20 °C until use. For qRT-PCR, 1 μl of the diluted cDNA was mixed with 5 μl Fast SYBR Green Master Mix (Applied Biosystems), 2.8 μl water and 0.2 μl of 10 μM human AEBP1 (F: GAGAAGGAGGAGCTGAAGAAAC; R: CGGATCTGGTTGTCCTCAATAC) or GAPDH (F: GGTGTGAACCATGAGAAGTATGA; R: GAGTCCTTCCACGATACCAAAG) forward and reverse primers (Integrated DNA Technologies). Data acquisition was performed on a StepOnePlus Real-Time PCR System and StepOne Software (Applied Biosystems). Statistics Statistical tests were performed using GraphPad Prism software v10. Unpaired t-tests were used to compare differences between severe and mild degenerate protein intensities and gene expression levels. Receiver operating characteristic (ROC) curve analysis was applied to proteins to evaluate the ability to distinguish between mild and severe IVD degeneration groups. The area under the curve (AUC) score was used to summarise the effectiveness of the selected proteins in differentiating mild and severe IVD degeneration, with proteins having an AUC score above 0.75 considered good discriminators. Correlational analysis was performed to evaluate relations between protein intensity and other parameters such as histology grade and age. Statistical significance was set to p < 0.05 unless stated otherwise. 
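To illustrate the ROC analysis described above, the sketch below computes an AUC for a single candidate protein from simulated log2 intensities in the mild (n = 18) and severe (n = 17) groups. It uses scikit-learn rather than GraphPad Prism, and the intensity values are invented for the example.

```python
# Illustrative ROC/AUC for one candidate protein distinguishing mild (0) from
# severe (1) IVD degeneration, using simulated log2 intensities.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
mild = rng.normal(24.0, 1.0, size=18)     # hypothetical intensities, mild group
severe = rng.normal(25.5, 1.0, size=17)   # hypothetical intensities, severe group

y_true = np.r_[np.zeros(mild.size), np.ones(severe.size)]
scores = np.r_[mild, severe]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(f"AUC = {auc:.2f} (values above 0.75 treated as good discriminators)")
```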
All experiments were conducted in compliance with the committee’s ethical standards and guidelines. IVD tissues were collected from 18 males and 17 females with a mean age of 41.3 ± 10.4 years (Table ). Excised tissues were stored in high-glucose Dulbecco’s modified eagle medium (DMEM, Sigma Aldrich ) following surgery and processed the same day. Whole blood was also collected from these donors to obtain plasma. IVD tissue samples were fixed in 10% neutral buffered formalin ( Sigma-Aldrich ) for 20–24 h and embedded in paraffin blocks. Sections (5 μm) were cut, placed on glass slides, deparaffinised through xylene, and rehydrated in decreasing concentrations of ethanol. Sections were stained with Mayer’s haematoxylin ( Solmedia Laboratory Suppliers ) for 2 min and washed with running tap water for 5 min. Sections were then counterstained with an eosin-Y alcoholic solution with phloxine ( Sigma-Aldrich ) for 10 s, dehydrated, cleared, and then mounted with coverslips. IVD tissues were histologically graded (ranging from 0 to 12) by an experienced histopathologist using the published scoring system described by Sive et al. . The 35 IVD tissues used in this study were classified into mild degenerate (Grades 4–7, n = 18) and severe degenerate (Grades 10–12, n = 17) (Table ). No tissues were classed as non-degenerate (Grades 0–3) as all samples were obtained from individuals undergoing treatment for disc degenerative disease diagnosed through MRI imaging. To extract protein for mass spectrometry, 25 μl of solubilisation buffer (5% (w/v) SDS in 50mM triethylammonium bicarbonate (TEAB), pH 7.55) was added to deparaffinised, rehydrated IVD tissue sections (2 × 5 μm per tube) and incubated at 95 °C for 20 min followed by 60 °C for 2 h in a thermomixer set to 1400RPM. After cooling to room temperature (RT), 75 μl of 5% (w/v) SDS, 10 M Urea, 50mM TEAB, and 13.33mM DTT (pH 7.55) were added to the samples. All samples were transferred to Covaris tubes and sonicated using a focused ultrasonicator (LE220-plus, Covaris) at 8 W for 20 min (sonicated for 300 s, peak power = 180, average power = 72, duty factor 40%, cycles per burst = 200, delay 15s, then repeated once). Reduced disulfide bridges were alkylated by adding 8 μl of 20mM iodoacetamide and incubating at RT in the dark for 30 min. Lysates were acidified with 12 μl of 12% (v/v) phosphoric acid. Acidified lysates were then centrifuged at 12000xg for 5 min and the supernatant was collected into a clean tube prior to proteolytic digestions using suspension trapping (S-Trap). 600 μl of S-Trap binding buffer (90% (v/v) methanol, 100 mm TEAB, pH 7.1) was added to each lysate, and the total volume was transferred to a micro S-Trap spin column ( Profiti ). Samples were washed ten times with 150 μl S-Trap binding buffer. After washing, 2 μg of trypsin ( Promega ) diluted in 25 μl TEAB (pH 8) was added to the S-Trap and columns were incubated at 47 °C for 1 h. Digested peptides were eluted by the addition of 40 μl 50mM TEAB (pH 8) followed by 40 μl of 0.2% (v/v) formic acid. Hydrophobic peptides were eluted by 40 μl of 30% (v/v) acetonitrile, 0.2% (v/v) formic acid and the total volume (120 μl) was collected and lyophilised in a SpeedVac. Peptides were resuspended in 100 μl of 3% (v/v) acetonitrile, 0.1% (v/v) formic acid and desalted using POROS Oligo R3 beads ( Thermo Fisher Scientific ) in 0.2 μm polyvinylidene fluoride filter plates ( Corning ). Briefly, peptides were mixed with R3 beads for 5 min at 800RPM using a thermomixer. 
Samples were then washed 10 times with 100 μl 0.1% formic acid for 2 min. Peptides were eluted in 50 μl of 30% acetonitrile. Elution was repeated to obtain a total volume of 100 μl. Samples were lyophilised and stored at 4 °C. For liquid chromatography with tandem mass spectrometry (LC/MS/MS), peptides were resuspended in 10 μl of 5% (v/v) acetonitrile, 0.1% (v/v) formic acid and analysed using an UltiMate 3000 RSLC ( Dionex Corporation ) coupled to a Q Exactive HF Orbitrap ( Thermo Fisher Scientific ) mass spectrometer. Peptide mixtures were separated using a multistep gradient from 95% A (0.1% (v/v) formic acid in water) and 5% B (0.1% (v/v) formic acid in acetonitrile) to 7% B at 1 min, 18% B at 58 min, 27% B in 72 min and 60% B at 74 min at 300nL/min, using a 75 mm × 250 μm inner diameter 1.7 μM CSH C18, analytical column (Waters). Peptides were selected for fragmentation automatically by data-dependent analysis. Mass spectrometers were operated using Xcalibur software (version 4.1.31.9, Thermo Scientific ). Raw IVD tissue mass spectrometry data files (.raw) were imported into MaxQuant v2.3.1 for analysis . Raw spectra were searched against a reviewed human protein database (UniProt UP000005640, 20,420 entries, July 2024) in Andromeda, MaxQuant’s built-in database search engine. Label-free quantification (LFQ) was activated to quantify protein abundance across samples. Normalisation of intensities across samples was performed using the MaxLFQ algorithm within the MaxQuant software as described in Cox et al. . Trypsin was selected as the digestion enzyme allowing a maximum of two missed cleavage sites. Oxidation of methionine and N-terminal acetylations were set as variable modifications, while carbamidomethylation of cysteine was set as a fixed modification allowing a maximum of five modifications per peptide. The first peptide search mass tolerance was set to 20ppm and the main search mass tolerance was set to 4.5ppm. Match between runs was disabled and the false discovery rate was set to 1% for peptides and proteins. All other parameters were left unchanged. MaxQuant output files were examined and the proteinGroups.txt file was used for downstream analysis. LFQ intensities were imported into Perseus 2.0.11 for further analysis. Data were filtered to remove potential contaminants, proteins only identified by modification site, and reverse hits. Data was filtered so that proteins with valid LFQ intensities in 50% of all samples were kept, leaving a total of 119 proteins. Protein LFQ intensities for the 119 proteins were log 2 transformed, and imputation was performed by replacing values with random numbers drawn from a normal distribution (width = 0.3, downshift = 1.8). Differential expression analysis between severe and mild degenerate IVD tissues was performed using a two-sample T-test and a permutation-based false discovery rate method was used for multiple comparisons. Data matrices were exported from Perseus for further analysis in R v4.2.2. and RStudio. Principal component analysis was performed using the ‘ prcomp’ function in R and visualised using the ‘ autoplot’ function in ggplot2 . Differentially expressed proteins were visualised using the EnchancedVolcano package . KEGG enrichment pathway and gene ontology analyses were performed using Enrichr . Gene set enrichment analysis was performed using clusterProfiler and visualised using enrichplot . Matrisome and non-matrisome proteins were categorised using the Naba et al. human matrisome database . 
To obtain plasma, whole blood was collected from the same donors as tissues at time of surgery. Whole blood was drawn into 9 ml red cap S-monovettes containing ethylenediaminetetraacetate tripotassium (K3-EDTA, Sarstedt). Within 30 min of collection, samples were centrifuged at 1,500 × g for 15 min at 4 °C to remove blood cells. Plasma supernatant was transferred to clean centrifuge tubes and spun at 2,000 × g for 14 min at 4 °C to remove any remaining cells. Plasma was then transferred to clean cryovials and frozen until use. For SWATH mass spectrometry (MS) analyses, 10 μl of plasma was immunodepleted using Pierce Top 12 Abundant Protein Depletion Spin Columns (Thermo Scientific) following the manufacturer’s instructions. Depleted plasma was then concentrated, and buffer was exchanged using Amicon Ultra-0.5 Centrifugal Filter Devices (Merck-Millipore). Total protein concentration was determined using a BCA protein assay kit (Thermo Fisher). The depleted plasma (containing 40 μg of protein) was denatured, reduced and alkylated in 25 mM ammonium bicarbonate containing 5 mM dithiothreitol (GE Healthcare), 50 mM iodoacetamide (Sigma-Aldrich) and 1% sodium deoxycholate (Sigma-Aldrich). Modified sequencing-grade trypsin (Promega) was added at a ratio of 10:1 substrate:enzyme and digestion was performed overnight at 37 °C. Digests were subsequently dried in a Genevac™ vacuum centrifuge (Thermo Fisher Scientific). Samples were reconstituted in loading buffer containing 2% (v/v) acetonitrile (Thermo Fisher Scientific), 0.1% (v/v) formic acid (Thermo Fisher Scientific), 100 fmol/μl PepCalMix (MS Synthetic Peptide Calibration Kit, AB Sciex UK Ltd) and 10× indexed retention time (iRT) standards (Biognosys AG, Switzerland). Samples were analysed by SWATH-MS with a micro-flow LC-MS system comprising an Eksigent nanoLC 400 autosampler and an Eksigent nanoLC 425 pump coupled to an AB Sciex 6600 Triple-TOF mass spectrometer with a DuoSpray Ion Source. Liquid chromatography gradient details and MS settings were as described by McGurk et al. Raw (.wiff) files were processed using DIA-NN software v1.8.1. First, for library-free search, an in silico-predicted spectral library was generated using the UniProt human proteome sequence database (UniProt Proteome ID: UP000005640, count: 20,654, July 2024). The generated spectral library was reuploaded into DIA-NN, and raw data were analysed using the robust LC (high precision) quantification strategy. Cross-run normalisation was set to ‘retention time-dependent’, and match between runs was disabled. The precursor false-discovery rate (FDR) was set to 1%. Recommended or default settings were used for all other parameters. The protein group output matrix and associated experiment annotation files were imported into FragPipe-Analyst software for further analysis. Before differential expression analysis, the minimum percentage of non-missing values was set to 50% for all samples. No further normalisation was performed at this stage. Perseus-type imputation was performed for the remaining 277 proteins as described above. Differential expression analysis was performed using limma, and the Benjamini–Hochberg correction was used for multiple comparisons. Data matrices were exported and visualised in R as described above. For immunohistochemistry, slides were deparaffinised and rehydrated as described above. Antigen retrieval was performed using citrate buffer (pH 6) for 20 min at 95 °C.
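The plasma differential-expression workflow above relies on limma within FragPipe-Analyst; the brief sketch below illustrates only the Benjamini–Hochberg adjustment applied to the resulting per-protein p-values, using statsmodels and made-up values for illustration.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-protein raw p-values, e.g. exported from the plasma
# differential-expression step (the study used limma's moderated tests).
raw_p = np.array([0.0004, 0.012, 0.03, 0.2, 0.51, 0.77])

# Benjamini-Hochberg adjustment at a 5% false-discovery rate.
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, q, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.4f} -> adjusted p = {q:.4f} ({'significant' if r else 'ns'})")
```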
Slides were allowed to cool to RT and endogenous peroxidase was blocked using 3% (v/v) hydrogen peroxide in industrial methylated spirit (IMS). Tissues were washed with tris-buffered saline (TBS) and non-specific binding was blocked using 25% (v/v) normal goat serum and 1% (w/v) bovine serum albumin (BSA) in TBS for 30 min at RT. Tissues were then incubated with 100 μl of rabbit anti-human AEBP1 primary antibody (1/150, Abcam AB254973) overnight at 4 °C. Slides were washed in TBS-Tween and incubated with 100 μl of secondary goat anti-rabbit antibody in TBS (1/300) for 30 min at RT. Signal amplification was achieved by applying avidin/biotin complex solution (Vector Laboratories) to sections for 30 min at RT. Sections were rinsed in TBS and 3,3’-diaminobenzidine tetrahydrochloride (DAB) was used for signal detection. Excess DAB was tapped off and the sections were rinsed in deionised water. The slides were then counterstained using Mayer’s haematoxylin (5 min) and rinsed in tap water before they were dehydrated, cleared, and mounted with coverslips. Slides were imaged using an automated 3DHISTECH Pannoramic P250 slide scanner and processed using SlideViewer software (3DHISTECH). Fresh IVD tissue was finely chopped and placed into a 50 ml tube containing 10 ml serum-free DMEM supplemented with 0.1% (w/v) collagenase type II (Gibco, 17101015) and 2% (v/v) antibiotic-antimycotic solution (Sigma-Aldrich). Tissues were incubated overnight at 37 °C. Cells were passed through a 70 μm cell strainer and centrifuged at 400 × g for 5 min. The cell pellet was resuspended in 10 ml serum-free medium and centrifuged at 400 × g for 5 min. Cells were resuspended in 5 ml complete disc cell media (high-glucose DMEM supplemented with 1 mM sodium pyruvate, 10 μM L-ascorbic acid 2-phosphate sesquimagnesium salt hydrate, 1% antibiotic-antimycotic solution, and 10% fetal calf serum) and dispensed into T25 cell culture flasks. Cells were cultured at 37 °C and 5% CO2 until they reached 80% confluency (P0). IVD primary cells were trypsinised and lysed using 1 ml TRI Reagent (Sigma-Aldrich). Lysates were incubated at RT for 5 min. Chloroform (200 μl) was added and each sample was shaken vigorously before centrifugation at 12,000 × g for 20 min at 4 °C. 250 μl of the aqueous phase was transferred into a new tube containing 250 μl isopropanol. Samples were incubated at RT for 10 min and then centrifuged at 12,000 × g for 20 min at 4 °C to precipitate RNA. The supernatant was discarded, and RNA pellets were washed twice with ice-cold 70% ethanol and centrifuged at 8,000 × g for 5 min at 4 °C. RNA pellets were dried for 10 min at RT and eluted in 50 μl 1× Tris-EDTA solution. RNA concentration was quantified using a NanoDrop 1000 spectrophotometer and associated software (Thermo Fisher Scientific). RNA (1 μg) from each sample was converted to cDNA using the High-Capacity RNA-to-cDNA Kit (Applied Biosystems) according to the manufacturer’s instructions. cDNA was diluted to 5 ng/μl and stored at −20 °C until use. For qRT-PCR, 1 μl of the diluted cDNA was mixed with 5 μl Fast SYBR Green Master Mix (Applied Biosystems), 2.8 μl water and 0.2 μl of 10 μM human AEBP1 (F: GAGAAGGAGGAGCTGAAGAAAC; R: CGGATCTGGTTGTCCTCAATAC) or GAPDH (F: GGTGTGAACCATGAGAAGTATGA; R: GAGTCCTTCCACGATACCAAAG) forward and reverse primers (Integrated DNA Technologies). Data acquisition was performed on a StepOnePlus Real-Time PCR System and StepOne Software (Applied Biosystems). Statistical tests were performed using GraphPad Prism software v10.
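The qRT-PCR analysis above normalises AEBP1 to the GAPDH reference gene, but the exact quantification scheme is not stated; the short sketch below therefore assumes the standard 2^(−ΔΔCt) method purely for illustration, with made-up Ct values and group labels.

```python
import numpy as np

# Hypothetical Ct values from the StepOnePlus run (one mean value per donor).
ct = {
    "mild":   {"AEBP1": [29.1, 28.7, 29.4], "GAPDH": [18.2, 18.0, 18.4]},
    "severe": {"AEBP1": [26.9, 27.3, 26.5], "GAPDH": [18.1, 18.3, 17.9]},
}

def delta_ct(group):
    # Normalise the target gene to the GAPDH reference for each donor.
    return np.array(ct[group]["AEBP1"]) - np.array(ct[group]["GAPDH"])

d_mild, d_severe = delta_ct("mild"), delta_ct("severe")

# 2^(-ddCt): fold change of the severe group relative to the mild-group mean.
ddct = d_severe - d_mild.mean()
fold_change = 2.0 ** (-ddct)
print(f"AEBP1 fold change (severe vs. mild): {fold_change.mean():.2f}")
```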
Unpaired t-tests were used to compare differences between severe and mild degenerate protein intensities and gene expression levels. Receiver operating characteristics (ROC) curve analysis was applied to proteins to evaluate the ability to distinguish between mild and severe IVD degeneration groups. The area under the curve (AUC) score was used to summarise the effectiveness of the selected proteins in differentiating mild and severe IVD degeneration, with proteins having an AUC score above 0.75 considered good discriminators. Correlational analysis was performed to evaluate relations between protein intensity and other parameters such as histology grade, and age. Statistical significance was set to p < 0.05 unless stated otherwise. Histological analyses of surgically excised intervertebral disc tissues reveal a spectrum of degenerative changes among donors undergoing discectomy for low back pain Histological characterisation of IVD tissue excised during discectomy can provide insight into structural changes associated with the progression of degeneration not normally detected by other imaging techniques such as MRI. We examined the histopathological features of tissues from donors undergoing lumbar discectomy for LBP. Haematoxylin and eosin staining of these tissues revealed significant alterations in the morphological and cellular architecture of IVD tissue marked by the presence of known features of degeneration such as the formation of cell clusters, loss of eosin staining and the presence of fissures (Fig. ) . We also noted that the severity of degeneration was highly variable, with some individuals exhibiting more exacerbated degenerative features than others. Mildly degenerated IVD tissues had smaller cell clusters (2–3 cells), and narrow fissures, whereas severely degenerated tissues exhibited very large cell clusters (6 + cells), wider fissures, more pronounced loss of eosin staining and presence of red blood cells and small vessels indicative of neovascularisation (Fig. ). These observations demonstrated that IVD tissues from donors undergoing surgery to treat LBP display varying degrees of microscopic degenerative changes that cannot be detected by standard diagnostic imaging techniques, highlighting the need for biomarkers that better reflect tissue changes occurring during IVD degeneration progression. Proteomic analyses show key differences in protein composition between mild and severe degenerated IVD tissues Histological assessment of degenerated IVD tissues revealed more widespread loss of eosin staining around cells in severely degenerated tissues compared to mild ones suggesting changes in ECM composition as degeneration progresses. To assess how the ECM composition changes in IVD tissues as degeneration progresses, we histologically classified excised tissues into two groups: mild degenerated (Grades 4–7, n = 18), and severe degenerated (Grades 10–12, n = 17) using the Sive et al. histological grading system . Label-free mass spectrometry analysis was then applied to samples to compare protein abundance levels between the two groups (Fig. A). A median of 294 proteins (28 proteins unique to the group) was detected in the severe degenerated tissues, while a median of 270 proteins (4 exclusively present in the group) was detected in the mild degenerated sample group. Following quality control and filtration based on 50% valid values, 119 proteins were detected in both mild and severe degenerated sample groups. No proteins were uniquely present in each group. 
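As an illustration of the ROC curve and correlation analyses described in the statistics paragraph above (which the authors ran in GraphPad Prism), the following sketch shows how an AUC and a grade correlation could be computed for a single protein; the intensities, labels, and grades are simulated, and scikit-learn/SciPy are assumed for illustration rather than being the tools used in the study.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Hypothetical log2 intensities of one protein in mild (0) and severe (1) donors.
labels = np.array([0] * 18 + [1] * 17)
intensity = np.concatenate([rng.normal(24.0, 1.0, 18), rng.normal(25.5, 1.0, 17)])

# Area under the ROC curve; an AUC above 0.75 was treated as a good discriminator.
auc = roc_auc_score(labels, intensity)
fpr, tpr, thresholds = roc_curve(labels, intensity)
print(f"AUC = {auc:.3f}")

# Correlation of intensity with histological grade (hypothetical grades 4-7 / 10-12).
grades = np.concatenate([rng.integers(4, 8, 18), rng.integers(10, 13, 17)])
r, p = stats.pearsonr(intensity, grades)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```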
Unsupervised dimensionality reduction with principal component analysis showed distinct clustering between mild and severe degenerated samples suggesting differences in the proteomes of the two groups (Fig. B). Differential expression analysis revealed that the abundance of 31 proteins (adjusted_p < 0.05; Log 2 Foldchange (LFC) = ± 1) was significantly increased in severe degenerated samples compared to mild, including IGLC6, HRG, AEBP1, C4A, TNC and COL6A3 (Fig. C, Additional File ). KEGG pathway enrichment analysis showed that the differentially distributed proteins were involved in complement system activation, ECM regulation, and protein digestion and absorption (Fig. D). Altogether, these results indicate that the severity of degeneration leads to changes in IVD protein composition. To further elucidate changes in ECM protein composition in the IVD, we categorised the differentially expressed proteins between severe and mild degenerated IVD into non-matrisome, core matrisome and matrisome-associated proteins using the Naba et al. Matrisome Database . Core matrisome and matrisome-associated proteins constituted 45.16% (14 proteins: 7 core matrisome proteins and 7 matrisome-associated proteins) while non-matrisome proteins accounted for 54.84% (17 proteins) of the differentially expressed proteins (Fig. E). Pathway enrichment analyses showed that enriched non-matrisome proteins were involved in antigen-receptor-mediated signalling, apoptotic cell clearance and phagocytosis (Supplementary Fig. A). Enriched matrisome-related biological pathways confirmed that differentially expressed matrisomal proteins were involved in ECM organisation, regulation of cell adhesion, collagen fibril organisation and axon guidance suggesting that biological processes associated with ECM remodelling and maintenance in the disc are further impaired as degeneration progresses (Supplementary Fig. B). AEBP1, TNC, MGP and, COL12A1 are the most increased core matrisome proteins as IVD degeneration worsens Highly enriched core matrisome proteins in response to degeneration progression included ECM glycoproteins AEBP1, TNC, MGP, and TGFBI and collagens COL12A1, COL6A2, and COL6A3 (Fig. A). Among these, adipocyte enhancer binding protein 1 (AEBP1) was the most differentially enriched matrisome protein in severe compared to mild degenerated tissues (LFC = 2.40). In addition to AEBP1, other proteins such as tenascin C (TNC), matrix gla protein (MGP) and collagen type XII alpha 1 chain (COL12A1), also showed a significant increase in abundance in severe compared to mild degenerated tissues. On the other hand, enriched matrisome-associated proteins included ECM regulators involved in fibrinolysis such as histidine-rich glycoprotein (HRG), plasminogen (PLG) and serpin family A member 1 (SERPINA1) indicative of impaired fibrinolytic activity as degeneration worsens (Fig. B, Supplementary Fig. A). We next examined the relationship of all core matrisome protein abundances with IVD tissue degeneration grade and found that protein levels of AEBP1, TNC, MGP and COL12A1 were the most significantly different between mild and severe degenerate tissues (Fig. C). These observations highlighted the potential for these proteins as markers for IVD degeneration progression. To confirm this, we performed receiver operating characteristics curve analyses on core matrisome and matrisome-associated protein intensities (Fig. D, Supplementary Fig. A). 
From the core matrisome proteins, AEBP1 had the highest area under the curve (AUC) score of 0.768 indicating that it was the most accurate at distinguishing between mild and severe degenerate tissue in comparison to TNC (0.739), MGP (0.748) and COL12A1 (0.742). In summary, this data emphasizes changes in ECM-related proteins in the pathogenesis of IVD degeneration and highlights their potential as biomarkers for discriminating between severe and mild cases of IVD degeneration. AEBP1 is a tissue biomarker for degeneration progression in the intervertebral disc As AEBP1 was the most significantly changed core matrisome protein with the highest AUC score, we further evaluated its effectiveness as a distinguishing marker between mild and severe IVD degeneration. Immunohistochemical staining revealed increased AEBP1 staining intensity in severe degenerate tissues in comparison to mild with the more extensive protein expression observed within the large degeneration-association cell clusters in severe degenerate tissues (Fig. A). AEBP1 RNA expression levels were also significantly increased in primary cells from severe degenerate IVD tissues in comparison to mild confirming that AEBP1 is altered at both protein and gene expression level (Fig. B). Interestingly, AEBP1 protein or gene expression levels showed no association with age (Fig. C-E), suggesting that changes in AEBP1 in the IVD are associated with the severity of degeneration and not influenced by age, thus AEBP1 could potentially serve as a marker for mild versus severe degeneration progression regardless of age. High levels of complement system proteins are associated with increased severity of IVD degeneration Degeneration-associated changes in IVD tissues such as matrix degradation and cell apoptosis have been previously described to activate the complement system resulting in increased inflammation, neovascularisation and further tissue damage . In this study, we found that complement system-related proteins were highly enriched in severe versus mild IVD tissues (Figs. D and A). The classical complement pathway component, C4A, was markedly increased in severe IVD tissues in comparison to mild. Similarly, alternative pathway proteins C3, CFH, and CFB were also significantly upregulated in severe compared to mild degenerated IVD (Fig. B). Strong positive correlations with AEBP1 were observed for all components indicating an association between AEBP1 levels, complement system activation and IVD degeneration progression (Fig. C). In summary, our data suggests that changes in both matrisome and non-matrisome related proteins in the IVD can be used simultaneously to monitor IVD disease progression. Plasma levels of A2M, F13B, MMP2, and IGF1 are correlated with histological grades of IVD degeneration While the identified tissue biomarkers offer a high correlation with degeneration severity, IVD tissue is not easily accessible and can only be obtained through highly invasive surgical techniques. Thus, to identify biomarkers that distinguish between mild and severe IVD degeneration and are also accessible with minimally invasive techniques, we performed mass spectrometry analyses on 35 blood plasma samples collected from the same individuals as the IVD tissues. Principal component analysis showed no distinct clustering between plasma samples from individuals with mild and severe degeneration plasma samples with both PC1 and PC2 explaining only 25.2% of the variance (Fig. A). 
Following data filtering based on 50% valid values, 277 proteins were detected in both groups. Differential expression analyses were applied to these to identify systemic changes as degeneration progressed. Only 5 proteins, A2M, F13B, HSPG2, MMP2, and IGF1, had a log2 fold change above 0.5 and a significance level less than 0.05 in severe compared to mild samples (Fig. B, Additional File ). Twenty-one of the 31 proteins differentially expressed in IVD tissue were also detected in plasma, but no significant changes were observed in plasma for these proteins (Supplementary Table ). F13B (coagulation factor XIII B chain), a zymogen involved in blood coagulation, was the most decreased protein in plasma from donors with severe degeneration in comparison to mild (LFC = −0.95). A2M (alpha-2-macroglobulin), a protease inhibitor, was also reduced in donors with severe degeneration (LFC = −0.79). Conversely, MMP2 (matrix metalloproteinase-2), a zinc-dependent endopeptidase, was the most enriched (LFC = 0.70) followed by IGF1 (insulin-like growth factor 1) (LFC = 0.56) (Fig. B). To investigate the potential use of A2M, F13B, MMP2, and IGF1 as biomarkers for IVD degeneration progression, we performed correlation analyses between these proteins and histological grades of degeneration. A2M and F13B had a moderate negative correlation with histological grade, while MMP2 and IGF1 showed a moderate positive correlation with histological grade (Fig. C). ROC curve analyses revealed that A2M had the highest AUC score of 0.79, demonstrating its high accuracy in distinguishing between mild and severe IVD degeneration in donors when compared to F13B, MMP2 and IGF1 (Fig. D). This suggested that A2M could be a potential plasma biomarker for IVD degeneration progression. Plasma levels of A2M show weak correlations with proteins that are altered in IVD tissue as degeneration progresses To determine if changes in plasma levels of A2M were related to those observed in the IVD tissue, we initially checked whether A2M was also present in our tissue data. We found that tissue A2M levels were increased in response to degeneration severity, showing an opposite trend to the one in plasma (Fig. E). Tissue A2M log2 intensities also exhibited a weak correlation with plasma A2M log2 abundances, indicating a poor relationship between the two (Fig. F). Additionally, we found that plasma A2M had no significant correlations with the 31 differentially distributed proteins in IVD tissue, including core matrisome AEBP1 (Fig. G, Additional File ). In summary, these results suggest a poor alignment between plasma and tissue protein changes in donors with mild and severe disc degeneration. Given the complexity of systemic plasma changes, a larger sample cohort may be required to improve the identification of plasma proteins related to the progression of IVD degeneration.
Several studies have assessed changes in the proteome and transcriptome of non-degenerate and degenerated IVD tissues from adult humans; however, investigations aimed at characterising pathophysiological changes at different stages of degeneration progression, and how these affect ECM as well as blood protein composition, have been limited. In this study, we histologically analysed and graded IVD tissue from donors undergoing discectomy surgery for the treatment of low back pain and found that some of the tissues exhibited more severe histological features of disc degeneration. More severely degenerated tissues exhibited larger cell clusters and extensive loss of pericellular ECM compared to mild degenerated tissues. An increased number and size of cell clusters in the IVD is a hallmark of advancing disc degeneration in humans. Moreover, cell clusters were found to be associated with increased MMP1 and loss of proteoglycans in severely degenerated discs in comparison to mild degenerated discs, suggesting that changes in the cellular landscape of the IVD may influence ECM composition as degeneration progresses. Despite this, differences in ECM protein composition in mild and severe degenerated tissues have not been fully defined. We characterised the matrisome profile of severe degenerated IVDs in comparison to mild to identify ECM proteins associated with degeneration progression. We observed a shift to a more profibrotic matrix environment in severe degenerated IVD tissues, characterised by high levels of collagens COL12A1, COL6A2, and COL6A3 as well as ECM glycoproteins AEBP1, TNC, MGP, and TGFBI, all of which are associated with increased fibrosis in the IVD and other tissues. Our results showed that AEBP1, TNC, MGP, and COL12A1 were the most increased core matrisome proteins as degeneration progressed. We thus evaluated the potential of these four proteins to be used as tissue biomarkers for distinguishing progressing stages of degeneration. ROC curve analyses showed that AEBP1, TNC, MGP and COL12A1 accurately differentiated between mild and severe degenerated IVD tissues, highlighting the potential of these proteins as a panel of biomarkers for degeneration progression. AEBP1 was the most differentially distributed ECM protein between mild and severe degenerated IVD tissues, exhibiting the highest sensitivity and specificity as a biomarker for degeneration progression. AEBP1 is a protein encoded by the gene AEBP1, existing in two functionally distinct isoforms (Q8IUX7-1 and Q8IUX7-2, UniProt, release 2024_04).
AEBP1 isoform 1, also known as aortic carboxypeptidase-like protein (ACLP), is described as a secreted 1158 amino acid long protein which consists of an N-terminal signalling peptide, a lysine-proline-serine rich region, a collagen-binding discoidin domain and a carboxypeptidase-like domain . AEBP1 isoform 1 has been linked to the ECM and is increased during vascular smooth muscle cell differentiation . In addition, AEBP1 isoform 1 is highly expressed in collagen-rich tissues such as the skin, blood vessels, liver, lung, and IVD . AEBP1 isoform 2 is a truncated version lacking the long N-terminal region in isoform 1. This isoform is described as a transcriptional repressor expressed in the nuclear region of adipocytes and osteoclasts . Our immunohistochemical analyses showed that AEBP1 staining was localised within the cytoplasm and in the pericellular and extracellular matrices but not in the nuclei suggesting that the isoform detected in this study is likely to be isoform 1 rather than isoform 2. Changes in AEBP1 RNA levels have been previously described in the disc. A recent study demonstrated increased AEBP1 gene expression in Pfirmann-graded late IVD degeneration compared to early IVD degeneration in human tissues . Similarly, we found that cells from severe degenerated IVD tissues expressed higher levels of AEBP1 in comparison to mild. Altogether, these findings validate that AEBP1 is increased at both gene expression and protein levels as degeneration progresses. Increased levels of AEBP1 have been previously implicated in other degeneration-related conditions. Higher AEBP1 levels were found in the articular cartilage of donors with osteoarthritis in comparison to normal counterparts. AEBP1 knockdown in mouse models of osteoarthritis revealed that loss of AEBP1 reversed degeneration association inflammation and ECM degradation . Another study found that ACLP/AEBP1 was elevated in human fibrotic lung tissue, with AEBP1 knockout mice exhibiting fewer myofibroblasts and less collagen in the lung following bleomycin injury in comparison to wild-type mice . Furthermore, AEBP1 gene expression levels were increased in severe liver fibrosis compared to normal liver . Conversely, loss of AEBP1 in mice was found to be detrimental to wound healing progression where regulation of ECM organisation is key for proliferative and remodelling phases of healing . In summary, these studies highlight that normal levels of AEBP1 are important in the maintenance of the ECM microenvironment and increased levels drive ECM degradation and fibrosis, both of which are known features of degenerating tissues including the IVD. Another well-documented feature of IVD degeneration is increased neovascularisation . AEBP1 has been demonstrated to have proangiogenic effects in several tissues. It is upregulated during vascular smooth muscle differentiation and was found to regulate vascular adventitial progenitor differentiation following injury . AEBP1 levels were also elevated in tumour endothelial cells thereby promoting angiogenesis within the tumour microenvironment. Downregulation of AEBP1 in tumour endothelial cells in vitro reduced levels of angiogenesis-related genes, POSTN and AQP1 , suggesting that AEBP1 may regulate new blood vessel formation . 
Based on these findings, AEBP1 may also promote neovascularisation during degenerative disc disease; however, further research is required to elucidate the specific role of AEBP1 in IVD neovascularisation and to assess its potential as a therapeutic target for degenerative disc disease. In addition to AEBP1, other matrisome-associated proteins previously shown to regulate neovascularisation in disease, such as PLG, HRG, LOX, SERPINA3 and, CLEC11A were also found in higher levels in severely degenerated tissues . This data supports that blood vessel formation and remodelling may be further enhanced as degeneration progresses, although again further work is required to confirm the presence and association of these matrisome proteins with the vasculature or endothelial cells in the degenerated IVD tissues. We also showed that high AEBP1 protein levels were strongly correlated with an increased abundance of complement system proteins, including, C4A, C3, CFB and CFH. While AEBP1 has not been previously shown to regulate the complement system, a study in glioblastoma tissues also found that high AEBP1 expression levels were associated with enrichment for complement and coagulation cascade pathway-related genes . Similar to our findings, this observation suggested a possible direct or indirect association between AEBP1 and the complement system in disease. A few recent studies have also reported increased levels of complement system proteins and genes in degenerated IVD tissues . Complement pathway proteins are predominantly synthesised in the liver and circulate in the blood. The lack of alteration in complement pathway proteins in the plasma proteome in this study, therefore, suggests that elevated tissue complement proteins are due to increased deposition or retention of these proteins in the damaged IVD. However, it remains unclear whether this complement pathway protein deposition is a result of, or is a contributor to increased degeneration in the disc (or both) . Complement pathway activation has been shown to regulate angiogenesis in pathology . As such, it is possible that AEBP1 and complement components act in synergy to promote angiogenesis, thereby contributing to degeneration. On the other hand, the complement system is activated in response to tissue damage and plays a major role in the clearance of apoptotic cells . Apoptotic cells are increased in degenerated IVD tissues which could result in increased activation of the complement pathway as degeneration worsens . However, deposited complement proteins also further recruit and activate other immune cells, including monocytes and macrophages which can release proteases and cause further damage to ECM-rich structures which may result in further activation of the complement pathway proteins through feedback regulation hence their high abundance as degeneration progresses . Our data suggests that AEBP1 may act as a tissue biomarker for monitoring degeneration progression in the tissue. However, IVD tissue is not readily accessible and is often obtained through highly invasive post-surgical procedures meaning its efficacy as a diagnostic tool is limited. Proteins identified in more accessible specimens such as urine or blood would make better biomarkers for monitoring IVD degeneration progression. We integrated proteome data from matched tissue and plasma samples from donors with severe and mild IVD degeneration to identify associated plasma biomarkers. 
We found that protein levels of A2M, a protease inhibitor, were differentially distributed in the plasma of donors with severe IVD degeneration compared to mild. Previous studies have shown that alterations in A2M may affect IVD function. For example, reduced A2M levels were found in severe degenerated NP tissues expressing high levels of reactive oxygen species and the addition of exogenous A2M reduced levels of oxygen reactive species in cultured NP, suggesting that A2M plays an antioxidative role in the IVD . A2M also inhibited inflammation and reduced expression of degradation enzymes in human chondrocytes, but increased levels of protective matrix genes such as aggrecan in vitro . Furthermore, the injection of autologous A2M was found to alleviate discogenic back pain in humans . These studies demonstrate that A2M may play a protective role in the IVD and alteration in levels may impair disc function. Therefore, it is possible that reduced plasma levels of A2M in donors with severe IVD degeneration may be linked to its pathogenesis and progression. Nonetheless, plasma A2M had weak associations with proteins altered in IVD tissues. This indicated that A2M could not be used as a sole plasma biomarker for IVD degeneration progression despite a moderate AUC score of 0.78. A2M can potentially be combined with other parameters, such as age, body mass index, tissue markers, or other routinely monitored blood markers, to improve its sensitivity in predicting degeneration progression. A recent study demonstrated that a combination of age, C-reactive protein and CCL22 plasma levels could efficiently predict the recovery of patients who underwent spine surgery to treat disc degeneration . However, sensitivity was reduced when these markers were used individually implying that plasma biomarkers for IVD degeneration are more efficient when combined with other parameters. The weak relationship observed between plasma and tissue protein levels may also be due to small cohort sizes used here. A larger cohort size may be necessary to fully characterise changes in plasma levels of A2M in relation to disc degeneration, although a recent study which analysed 100 serum samples from subjects with and without modic changes found no correlation between serum protein levels and modic changes . Altogether, these results suggest that much larger population studies are required to fully characterise the relationship between blood biomarkers and IVD degeneration. In summary, our data demonstrates that ECM protein composition is dysregulated as IVD degeneration worsens. We show elevated protein levels of AEBP1, a collagen-binding protein that has been previously implicated in increased fibrosis, angiogenesis and ECM degradation, in severe degenerate tissues in comparison to mild tissues. We further suggest that AEBP1 is a potential tissue biomarker for monitoring degeneration progression. However, histologically observed tissue changes in the disc need to be integrated with non-invasive methods of evaluating IVD degeneration such as blood biomarkers, to aid in the diagnosis, monitoring, prognosis and treatment of the disease. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2: Title of the data : Differentially distributed proteins from mass spectrometry analyses of severe and mild degenerated intervertebral disc tissues. 
Description of the data : List of Log2 LFQ protein intensities and differentially distributed proteins obtained from label-free mass spectrometry analyses of severe and mild degenerated intervertebral disc tissues. Supplementary Material 3: Title of the data : Differentially distributed proteins from mass spectrometry analyses of plasma from donors with severe and mild intervertebral disc degeneration. Description of the data : 1. List of Log2 protein intensities and differentially distributed proteins obtained from DIA SWATH mass spectrometry analyses of plasma from donors with severe and mild intervertebral disc degeneration. 2. Correlation analyses between plasma A2M and proteins changed in IVD tissue in response to degeneration progression. |
A novel histopathological feature of spatial tumor–stroma distribution predicts lung squamous cell carcinoma prognosis | 43f3efbc-6bbe-4cfc-a357-cb27dd4d1334 | 11531967 | Anatomy[mh] | INTRODUCTION Squamous cell carcinoma is one of the major histological types of lung cancer, along with adenocarcinoma, and accounts for 15%–20% of lung cancer cases. Although remarkable progress has been made in lung cancer treatment in recent years, , the prognosis of squamous cell carcinoma after resection remains poor. The recurrence rate of squamous cell carcinoma after surgical resection ranges from 25% to 40%. , One of the most important issues in the pathological examination of lung squamous cell carcinoma is that there are few pathological indicators that can estimate the tumor malignancy and patient outcomes. , For adenocarcinoma, prognostic markers have been described in a number of articles. , , In particular, the histological grading system based on tissue architecture has been widely acknowledged as a reliable predictor of patient prognosis and is utilized in practical pathological diagnoses. , In contrast, histological grading of squamous cell carcinoma based on the degree of keratinization does not predict patient outcomes. Several histological findings have been proposed as potential predictors. Tumor budding has been reported to be associated with a poor prognosis. , Moreover, one of our studies showed that an alveolar space‐filling non‐destructive growth pattern predicts a good prognosis in squamous cell carcinoma arising in the periphery of the lungs. However, none of these has been widely accepted as predictors of patient outcomes to date, other than conventional histological findings, including lymphovascular invasion, visceral pleural invasion, and pathological stage. , Cancer tissues are a complex mixture of cancer and stromal cells, with the stroma providing a microenvironment that supports tumor growth and resistance to treatment. , Our group has shown that stromal cells, including cancer‐associated fibroblasts (CAFs) and tumor‐associated macrophages (TAMs), play significant roles in cancer cells. , , Moreover, we demonstrated that the amount of cancer stroma in cancer tissues indicated poor patient outcomes after the resection of peripheral lung squamous cell carcinoma. Considering the relationship between cancer cells and the surrounding stroma, their spatial distribution in cancer tissues may be crucial; however, previous studies have focused solely on their quantity, with little attention given to spatial distribution. , Therefore, this study aimed to analyze the clinical relevance of the spatial distribution of cancer cells and stroma in cancer tissues. To characterize the spatial distribution of the cancer elements, we utilized texture features, referring to previous analyses in the radiology field that focused on the clinical relevance of texture features, including Shannon's entropy in lung cancer images. , , In this study, we employed a cutting‐edge approach by incorporating machine learning‐based image processing and analysis with texture features, and investigated its clinical impact in lung squamous cell carcinoma patients. MATERIALS AND METHODS 2.1 Patient selection The patient cohort and tissue specimens have been described previously. In short, 132 patients with peripheral lung squamous cell carcinoma measuring 3–5 cm who received surgical resection at the National Cancer Center Hospital East between 2002 and 2015 were enrolled. 
Cases that received neoadjuvant therapy were not included in this study. 2.2 Histological diagnosis Disease stages were classified according to the 8th edition of the TNM classification of malignant tumors. Histological diagnosis was made following the 5th edition of the World Health Organization series on histological classification. 2.3 Clinicopathological characteristics The clinicopathological characteristics of the patients were collected from the available medical records. The following factors were retrospectively analyzed: sex, age, smoking history, clinical stage, tumor size, pathological stage (pStage), pathological nodal status, visceral pleural invasion, lymphovascular invasion, adjuvant therapy, recurrence, and survival (Table ). Additionally, for cases in which preoperative FDG-PET of the primary tumor was performed, the SUVmax data were analyzed. 2.4 Tissue preparation and immunohistochemistry As described previously, H&E and cytokeratin AE1/3 immunohistochemistry slides for each tumor were prepared and used in this study. Tissue slides were scanned using a NanoZoomer 2.0 system (Hamamatsu Photonics, Hamamatsu, Japan). 2.5 Image segmentation and analysis The scanned pathological images of the tumors were reviewed by two pathologists (T.T. and G.I.). Whole tumor areas were annotated on cytokeratin AE1/3 immunohistochemistry slides using QuPath® software. For the segmentation of cancer cells in AE1/3 immunohistochemistry slides, a machine learning-based method, simple linear iterative clustering (SLIC), was utilized through the scikit-image and openslide modules in Python 3.8.1 (Figure ). The background and air space areas in the lung tissues were also segmented by binarizing the AE1/3 immunohistochemistry images. Finally, we integrated the segmented images and annotation data to create maps depicting the whole tumor area, cancer cells, cancer stroma, normal lung tissues, air space, and background for each case (Figure ; cancer cells: red, cancer stroma: yellow, the whole tumor area: red and yellow, normal lung tissue: gray, air space and background: black). In this study, we applied a mathematical approach to precisely analyze the spatial distribution of cancer cells and stroma. Entropy is a concept used to analyze data in various fields, including information theory, biology, and ecology. Shannon's entropy is a well-known index of biodiversity in ecology. The TSR and Shannon's entropy were calculated as previously described. For pathological image analysis, spatial entropy, a modified Shannon's entropy index weighted by the Euclidean distance of the objects, has been proposed and utilized to investigate the spatial distribution of immune cells in cancer tissues. Therefore, to properly characterize the spatial positional information in pathological images, we employed this framework to analyze the spatial distribution of cancer cells and stroma. More specifically, we multiplied Shannon's entropy of cancer cells and cancer stroma by the following coefficient: the sum of the Euclidean distance among cancer cell regions (Distance_tumor) and that among stroma regions (Distance_stroma), divided by the Euclidean distance of the whole tumor area (Distance_all). We defined this index as the STSDI, and clinicopathological analysis was conducted based on the STSDI (Figure ).
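The index described above can be written as STSDI = H × (Distance_tumor + Distance_stroma) / Distance_all, where H = −p_tumor·ln(p_tumor) − p_stroma·ln(p_stroma) is the two-class Shannon entropy of the tumor and stroma area fractions. The sketch below is only an illustration of this definition: the text does not spell out how the region-to-region Euclidean distances are aggregated, so summed pairwise distances between connected-component centroids (and natural logarithms) are assumptions, and the toy masks stand in for the real AE1/3-derived segmentation maps.

```python
import numpy as np
from scipy.spatial.distance import pdist
from skimage.measure import label, regionprops

def shannon_entropy(p_tumor, p_stroma):
    # Two-class Shannon entropy of the tumor/stroma area fractions.
    probs = np.array([p_tumor, p_stroma])
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs)).sum())

def stsdi(tumor_mask, stroma_mask):
    """Illustrative spatial tumor-stroma distribution index (STSDI).

    tumor_mask / stroma_mask: boolean 2-D arrays taken from segmentation maps.
    Region distances are approximated as summed pairwise Euclidean distances
    between connected-component centroids (an assumption, not the paper's
    stated implementation).
    """
    p_tumor = tumor_mask.sum() / (tumor_mask.sum() + stroma_mask.sum())
    entropy = shannon_entropy(p_tumor, 1.0 - p_tumor)

    def summed_distance(mask):
        centroids = np.array([r.centroid for r in regionprops(label(mask))])
        return pdist(centroids).sum() if len(centroids) > 1 else 0.0

    d_tumor = summed_distance(tumor_mask)
    d_stroma = summed_distance(stroma_mask)
    d_all = summed_distance(tumor_mask | stroma_mask)
    if d_all == 0:
        return 0.0
    return entropy * (d_tumor + d_stroma) / d_all

# Example with toy 64 x 64 masks; real input would be the AE1/3-based maps.
rng = np.random.default_rng(0)
tumor = rng.random((64, 64)) > 0.6
stroma = (~tumor) & (rng.random((64, 64)) > 0.4)
print(f"STSDI = {stsdi(tumor, stroma):.3f}")
```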
In our analysis, we automated STSDI calculations by inputting annotated pathological images and their associated data directly into our analytical pipeline, streamlining processing and enhancing reproducibility. 2.6 Evaluation of growth patterns and tumor budding of squamous cell carcinoma The proportion of non-destructive growth patterns of cancer cells in lung squamous cell carcinoma was also analyzed, as previously described. In summary, cancer cell nests display replacement growth of alveolar-lining epithelial cells or fill the alveolar space as non-destructive growth patterns. As previously described, small isolated tumor clusters consisting of fewer than five cancer cells in the surrounding stroma are regarded as tumor budding. The proportions of non-destructive and destructive growth patterns and the presence of tumor budding in each tumor were analyzed by two pathologists (T.T. and G.I.). 2.7 Statistical analysis Recurrence-free survival (RFS), disease-specific survival (DSS), and overall survival (OS) were analyzed using log-rank tests. Survival curves were plotted using the Kaplan–Meier method. For the predictors of RFS and DSS, univariate and multivariate analyses were performed using the Cox proportional hazards regression model to calculate the 95% confidence intervals and hazard ratios (95% CI, HR). Two-category comparisons were conducted employing either the chi-square test, Fisher's exact test, or the Mann–Whitney U-test (p < 0.05). All statistical analyses were performed using Python 3.8.1. 2.8 Ethical considerations This study was approved by the National Cancer Ethical Review Board (IRB approval number: 2019–091), and was performed in accordance with the tenets of the Declaration of Helsinki.
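The survival analyses in Section 2.7 were run in Python 3.8.1, but the packages used are not named; the sketch below uses lifelines as an assumed, illustrative choice, with a small set of made-up follow-up times, recurrence flags, and covariates.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months to recurrence/censoring, event flag,
# and covariates (1 = STSDI-high, 0 = STSDI-low; 1 = pStage II/III, 0 = pStage I).
df = pd.DataFrame({
    "months": [12, 34, 60, 8, 45, 72, 25, 66, 15, 80],
    "recurrence": [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    "stsdi_high": [0, 1, 1, 0, 0, 1, 0, 1, 0, 1],
    "pstage_ii_iii": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
})

# Log-rank test comparing recurrence-free survival between the two STSDI groups.
high, low = df[df.stsdi_high == 1], df[df.stsdi_high == 0]
result = logrank_test(high.months, low.months, high.recurrence, low.recurrence)
print(f"log-rank p = {result.p_value:.3f}")

# Kaplan-Meier curves per group.
kmf = KaplanMeierFitter()
for name, grp in df.groupby("stsdi_high"):
    kmf.fit(grp.months, grp.recurrence, label=f"STSDI-high={name}")
    print(kmf.survival_function_.tail(1))

# Multivariate Cox proportional hazards model (toy data; real fits need the full cohort).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurrence")
cph.print_summary()
```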
RESULTS

3.1 Patient characteristics
The study cohort (Table ) included 116 males and 16 females with a mean age of 71 years (range: 54–86 years). All patients received lobectomy or pneumonectomy, and 28 received adjuvant therapy. The follow-up period ranged from 50 to 116 months (median follow-up period for surviving patients: 76 months).

3.2 Histological images and STSDI values
Representative H&E-stained images, AE1/3 immunohistochemistry images, and segmented region maps are shown in Figure . The STSDI values were 0.173 and 0.129 for Cases 59 and 92, respectively (Figure : Case 59, 2B: Case 92). Because of the nature of the calculation formula, STSDI and TSR were strongly related, and the scatter plot exhibited a distribution that closely followed a parabolic shape (Figure ). Tumors with the same TSR have identical Shannon's entropy values (Figure ); nevertheless, even with a similar TSR, the STSDI values differed between Case 107 (Figure ; STSDI: 0.169) and Case 115 (Figure ; STSDI: 0.156) (Figure ). In Case 59, the presence of dilated air spaces within the tumor (depicted as black in the segmentation image) may initially obscure interpretation. However, both Case 59 and Case 107 demonstrate a widespread distribution of tumor and stromal regions, characterized by a notable intermingling pattern. Consequently, both cases exhibit elevated STSDI values. We defined an STSDI-high tumor as one with an STSDI value > the median STSDI value (0.1606), and an STSDI-low tumor as one with an STSDI value ≤ the median STSDI value (Figure ). Further analysis of the two groups was performed to determine the clinical significance of the STSDI in this cohort. Similarly, tumors with Shannon's entropy > the median Shannon's entropy value were classified as Shannon's entropy high, whereas the others were classified as Shannon's entropy low.

3.3 Clinicopathological characteristics of STSDI-high and -low groups
The clinicopathological characteristics of patients in the STSDI-high and STSDI-low groups are shown in Table . Overall, these two groups showed no significant differences in clinicopathological factors, including sex, age, smoking history, clinical stage, tumor size, pathological nodal status, pathological stage, lymphovascular invasion, visceral pleural invasion, tumor budding, or adjuvant therapy. Similarly, the analysis revealed no statistical difference in FDG-PET SUVmax values between the groups (Figure ).

3.4 Correlations between STSDI and patient survival
To elucidate the clinical significance of the spatial distribution of cancer cells and stroma in squamous cell carcinoma, we analyzed the correlations between STSDI and patient outcomes. Log-rank analysis showed that the STSDI-low group had a significantly shorter RFS than the STSDI-high group (Figure ).
The 5-year RFS rates were 49.5% in the STSDI-low group and 77.0% in the STSDI-high group. Furthermore, DSS and OS were significantly shorter in STSDI-low tumors than in STSDI-high tumors (Figure ; 5-year DSS: 53.6% vs. 81.5%, 5-year OS: 39.0% vs. 67.0%). In contrast, there were no significant differences in RFS, DSS, and OS between the Shannon's entropy-high and entropy-low groups (Figure ). In addition, although there were no significant differences in RFS, DSS, and OS between the STSDI-high and -low groups in pStage I, STSDI-low tumors had significantly shorter RFS, DSS, and OS than STSDI-high tumors in pStage II and III (Figure ).

3.5 Univariate and multivariate Cox regression analysis for patient survival
To investigate the impact of the STSDI and other clinicopathological characteristics on patient outcomes in detail, Cox regression analysis was performed (Tables and ). Univariate analysis showed that a low STSDI, in addition to pStage and adjuvant therapy, was an unfavorable factor for RFS (p < 0.005, HR 2.894 (1.552–5.396)). Moreover, multivariate analysis revealed that a low STSDI was an independent unfavorable factor (p < 0.005, HR 2.668 (1.413–5.038)), as was pStage (p < 0.005, HR 4.638 (2.369–9.080)). In addition, STSDI, visceral pleural invasion, and pStage were found to be unfavorable factors for disease-specific survival in the univariate analysis, and multivariate analysis showed that they were independent predictors of disease-specific death (STSDI: p < 0.005, HR 3.057 (1.470–6.360), pleural invasion: p = 0.010, HR 2.360 (1.225–4.549), pStage: p < 0.005, HR 4.964 (2.432–10.132)).

3.6 Growth patterns of squamous cell carcinoma and STSDI
Considering the histopathological aspects, it is conjectured that the spatial distribution of cancer cells and stroma reflects the growth patterns of cancer cells. As shown in Figure , Case 1 had a large area of non-destructive growth patterns in the tumor (non-destructive growth patterns: 40%) and exhibited a high STSDI of 0.165. However, even though Case 88 (shown in Figure ) had a similar TSR, the whole tumor consisted of destructive growth patterns of cancer cells with fibrotic stroma (non-destructive growth patterns: 0%), and the STSDI of this tumor was 0.156. For statistical analysis, the cases were divided into two categories based on the proportion of non-destructive growth patterns: non-destructive growth-low, where the proportion was 20% or below, and non-destructive growth-high, where it exceeded 20%. The results showed that the former group, with a smaller proportion of non-destructive growth patterns, had a significantly lower STSDI than the latter group, with a larger proportion of non-destructive growth patterns (Figure ).
DISCUSSION

In this study, we identified a novel pathological characteristic, the spatial distribution of cancer cells and stroma, and demonstrated its clinical significance in predicting patient prognosis in peripheral lung squamous cell carcinoma. To characterize the spatial distribution of cancer cells and the surrounding stroma, we performed machine learning-based image segmentation using immunostained images of cytokeratin and incorporated them with texture features. A machine learning method, SLIC, enabled us to precisely evaluate the cytokeratin AE1/3-positive cancer cell component and the AE1/3-negative cancer stroma. In the field of radiology, previous studies have shown that textural features, including Shannon's entropy in radiological images, can predict patient outcomes. Furthermore, texture analysis has recently been used to analyze pathological tissue images, including the distribution of immune cells and Ki67 expression in cancer cells. These results prompted us to analyze the textural features of each tumor. In particular, for the quantification of the spatial distribution, we employed one texture feature, spatial entropy, because it allows us to include spatial information of the cancer elements by weighting Shannon's entropy according to their Euclidean distances. By employing spatial entropy, previous studies evaluated the local spatial distribution of immune cells in breast cancer and pancreatic cancer tissues. However, in this study, we utilized spatial entropy to comprehensively analyze the spatial relationship between cancer cells and the surrounding stroma. Additionally, our analysis was conducted over a larger area so as to include all tumor regions.
In the present study, the STSDI was significantly correlated with both tumor recurrence and patient prognosis, whereas Shannon's entropy, which did not account for the positional information of cancer cells and stroma, did not show any significant association with prognosis. These results indicate that not only the proportion of tumor elements but also their spatial distribution is an important characteristic for the evaluation of cancer tissues. Furthermore, our results support the proposition that the spatial interaction between cancer cells and stroma in the tumor microenvironment is a determinant of tumor behavior.

In Stage II–III patients, the STSDI proved instrumental in prognostic stratification, thus potentially aiding adjuvant therapy planning. While adjuvant therapy is typically recommended for these stages by current guidelines, its practical application is often influenced by variables such as patient age and existing comorbidities. Our method improves the accuracy of identifying patients who may require more aggressive therapeutic strategies. Moreover, our results showed that a low STSDI was an independent predictor of tumor recurrence and disease-specific death. Interestingly, although STSDI values were not associated with pathological findings including pathological stage, lymphovascular invasion, visceral pleural invasion, and tumor budding, a low STSDI correlated with destructive growth patterns of peripheral lung squamous cell carcinoma. Considering that the destructive growth pattern predicts an unfavorable prognosis, as reported in previous studies including ours, it is postulated that the STSDI reflects the malignant potential of cancer, which may explain the poorer patient outcomes in STSDI-low patients. In contrast to Koike's study, the TSR was not found to be a prognostic factor in this study, possibly because of the difference in segmentation methods, particularly because necrotic regions were not considered in this study.

A limitation of this study is that it was a retrospective study conducted at a single institution. Our investigation focused on peripheral squamous cell carcinoma rather than central tumors because the frequent occurrence of obstructive pneumonia associated with central tumors makes it challenging to precisely evaluate the whole tumor area.

Recent advances in digital pathology and computational image analysis have enhanced the visualization of diverse cancer elements. By leveraging these advances, our study introduces a novel framework for characterizing the spatial distribution of cancer cells and the surrounding stroma. Importantly, while most studies have focused on local proximity between cancer elements within small regions of interest (ROIs), we attempted to analyze the entire tumor region on the tissue slide by incorporating a mathematical algorithm into the histological image analysis. In addition, our framework automates the computational analysis of spatial distributions, enabling pathologists to conduct analyses simply by inputting annotation data and histological images, without the need for specialized computer skills. We anticipate that this will facilitate its practical use in routine diagnostics, indicating a promising path toward widespread clinical application. Our novel approach enabled us to predict the malignant behavior of peripheral lung squamous cell carcinoma.
Further investigations are required to explore whether this approach can be effectively implemented for different cancer types; however, we believe that our approach provides a new perspective on the pathological analysis of cancer tissues.

Tetsuro Taki: Conceptualization; data curation; formal analysis; funding acquisition; investigation; methodology; project administration; visualization; writing – original draft; writing – review and editing. Yutaro Koike: Investigation; resources; writing – review and editing. Masahiro Adachi: Writing – review and editing. Shingo Sakashita: Writing – review and editing. Naoya Sakamoto: Writing – review and editing. Motohiro Kojima: Writing – review and editing. Keiju Aokage: Resources; writing – review and editing. Shumpei Ishikawa: Writing – review and editing. Masahiro Tsuboi: Resources; writing – review and editing. Genichiro Ishii: Conceptualization; investigation; writing – review and editing.

The authors thank Editage for English-language editing.

This work was supported in part by the Japan Society for the Promotion of Science KAKENHI grant number 21K20821.

T.T., Y.K., M.A., S.S., N.S., M.K., K.A. and M.T. declare no conflict of interest in association with the present study. S.I. and G.I. are editorial board members of Cancer Science.

Approval of the research protocol by an Institutional Review Board: Institutional review board approval was obtained from the participating institution (approval number: 2019–091), and the study was publicly disclosed on the official websites of the conducting institutions. Research participants were provided the opportunity to decline participation. This study adhered to the principles outlined in the Declaration of Helsinki. Informed Consent: N/A. Registry and the Registration No. of the study/trial: N/A. Animal Studies: N/A.

Supporting information: Figure S1. Figure S2. Table S1.
Establishing a standing patient advisory board in family practice research: A qualitative evaluation from patients' and researchers' perspectives | 25d9c419-79da-470c-a40e-e6175ef51a6d | 11180710 | Family Medicine[mh] | INTRODUCTION Well prepared patient and public involvement (PPI) is an integral part of high‐quality research and an effective tool to prevent so‐called ‘research waste’. , , Integrating patients' and providers' perspectives in research in an early stage fosters the feasibility of research projects. Furthermore, PPI contributes to the development of patient‐relevant care solutions within studies and increases the transferability of study results into primary care. Therefore, the establishment of formats and structures for stakeholder involvement—that is the involvement of family practitioners, health care assistants and patients—is a significant feature in most family practice‐based research networks (FPBRNs) in Germany. , This development was boosted in 2020 when the Federal Ministry of Education and Research funded six regional and transregional FPBRNs encompassing 23 academic family medicine departments and a coordination office within the initiative DESAM‐ForNet. The initiative aims to foster high‐quality research in the outpatient setting by developing sustainable, reliable and scalable research structures that compare to research structures in the inpatient setting. Over the course of time, FPBRNs will add evidence that reflects family practitioners', medical health assistants' and their patient populations' tasks and needs to the overall body of evidence on prevention, diagnostics and therapies. FPBRNs have started to develop qualification programs for research practices, worked on solutions to gather patient data from research practices, conducted several (clinical) interventional pilot studies within the networks and are in close contact with family practices all over Germany. To incorporate patients' perspective in research within our FPBRN network Frankfurt am Main (ForN) we decided to establish a patient advisory board (PAB) as an initial component of our network structures. Different from approaches of study‐specific PABs, we aimed to establish a standing PAB that is located within our FPBRN and selectively involved within different studies. Furthermore, we aimed to include patients that represent the broad patient population from family practices, that is, from all ages, genders, social backgrounds and with and without preconditions. We chose the term ‘patient advisory board’ and addressed potential members as ‘patients’, because we aimed to focus on their role in family practice, namely patients. With regard to the inclusion criteria, we could also have chosen the term ‘citizens’. The diversity of conditions and experiences of PAB members is another distinct feature compared to other study‐specific PPI approaches in healthcare that often include patients with a similar medical condition. Subsequently, we aimed to include persons who contribute their individual everyday experiences with healthcare in family practice in contrast to patient representatives from patient organizations with a focus on a specific condition. We did not actively reach out to caregivers, even though some patients may hold a double role. In this paper, we describe the establishment of a standing PAB within our FPBRN ForN, outline methods and content of the PAB's research involvement and present PAB members' and researchers' perspective on these processes. 
MATERIALS AND METHODS

The reporting in this article follows the GRIPP2 Reporting Guideline.

2.1 PPI strategy and level of PPI
We aimed to establish a standing PAB as part of a sustainable and ready-to-use research structure and to create a relationship of two-way learning and mutual trust. This is to be fostered by a coordinator as a stable contact person who organizes PAB meetings, is responsive to barriers and questions from PAB members and acts as a mediator between the FPBRN's different study teams and the PAB. The coordinator was trained and experienced in qualitative and participatory methods as well as workshop design and moderation. PAB members are involved via participatory workshop meetings (3–4 per year) or in one-on-one consultations, for which they are financially reimbursed. Predominantly, the level of involvement is defined as 'consultation', that is, 'asking members of the public for their views and use these views to inform decision making'.

2.2 Recruitment
We aimed to include patients who represent the broad patient population from family practices, that is, from all ages, genders, social backgrounds and with and without preconditions. Therefore, we had no inclusion criteria besides the ability to participate and communicate in PAB meetings. Furthermore, to maintain clarity of roles, we decided to exclude persons with a background in health care research. We used several multimodal recruitment strategies and recruited patients between August 2021 and April 2022. (1) We developed an information flyer with a prestamped response postcard that we handed over to 10 interested research practices for display in their waiting rooms. Furthermore, we asked pharmacies to display our information flyer. (2) We talked to interested family practitioners about the PAB, handed over information material and asked them to approach patients they deemed interested individually. (3) We asked patient participants from a former study, in which an intervention was co-designed, if they would like to join the PAB. (4) We contacted the coordinator of the standardized patient programme at the Frankfurt University Hospital and asked him to approach standardized patients individually with our PAB information materials. Standardized or simulated patients support medical education by acting like a patient with a certain disease. As they are used to communicating with medical staff, we hoped their barrier to involvement would be low, even though they have no formalized medical knowledge and their input is based on personal experiences. (5) We developed a workshop for the citizen sciences programme at Goethe University Frankfurt. The programme's schedule of lectures and workshops open to the public is available in print and online. When patients contacted us, we asked them for a personal phone call. Within this call, we introduced the FPBRN and the Institute of General Practice to them, elaborated their role and tasks as a PAB member, informed them about planned meeting sequences, provided room for questions and announced the date for the next planned onboarding workshop. After the phone call, we sent a short questionnaire that asked for contact information, gender, age, medical conditions and their preferred format for PAB meetings (digital or face-to-face) as well as consent to data processing.

2.3 Training

2.3.1 Onboarding workshop
We designed an onboarding workshop that included information on our FPBRN as well as research topics covered at our institute and introduced the stages of a research project together with examples of patient involvement at each stage. After each information input, we planned for a short group discussion so that PAB members could get to know each other, that is, their expectations of the PAB, their experiences with family medicine and which aspects of family medicine research they found most interesting. We made clear that there is no duty to share experiences and that they could select which parts they wanted to share with the group. Furthermore, we asked everyone to grant confidentiality to experiences shared within meetings.

2.3.2 Technical introduction workshop and technical support
Each PAB member was offered a technical introduction workshop in which the functions of the video conference system were practiced. Furthermore, one team member was available during each workshop to solve technical problems with the video conference system via phone.

2.3.3 On-the-job training
PAB members were informed about the topic and the attending researchers of each PAB workshop via an invitation email. We aimed to minimize the need to prepare in advance; therefore, we designed each meeting in a way that allowed PAB members to participate in a meaningful way without preparation. To achieve this, attending researchers were asked to prepare a methods section that introduced the study design and methods of the study that was discussed in the following workshop as well as basic information on the overall aim of the presented study. This 'on the job' training should step by step enhance PAB members' knowledge of research methods, while these methods were always presented in the context of the actual study and the workshop on this study. In this manner, we aimed to combine methodological training with study content and therefore to contextualize the PPI activity within the study setting and vice versa. This approach also facilitated researchers' training in PPI 'on the job' by developing PAB workshops together with the PPI coordinator. As we aimed to implement and expand PPI activities within the study teams of the FPBRN, we provided methodological counselling in PPI when necessary. Researchers with little experience in PPI could approach the coordinator with a topic they wished to be reflected from the patients' perspective, and the coordinator worked together with the researchers to develop a feasible workshop design by reflecting together on questions such as: What is a realistic aim for a 2 h workshop and how much content can be discussed within this time? What is the most important question to be discussed? Which changes to the study are actually possible? Which background information is needed for PAB members to discuss the topic? How is this content best presented and prepared for a nonscientific audience?

2.3.4 Glossary
We started a glossary in the onboarding workshop and asked PAB members to write each unclear term into the chat. A member from the academic team explained the term immediately, and the term was inserted into a glossary that was adapted after each meeting, emailed to participants and displayed in the secure PAB section of the FPBRN's website.

2.4 Evaluation
The literature on evaluation of PPI is diverse. While some authors claim that we need to focus more strongly on PPI as a social interaction with regard to power relations, 'space to talk' and 'space to change', others stress that we need more information on the actual impact of PPI on research, that is, what really changed by involving patients and stakeholders. Most authors emphasize, however, that we need more information and more reporting on PPI activities altogether. In our evaluation of the PAB's activities, we addressed both PPI as a social interaction from PAB members' and researchers' perspectives and assessed PPI's impact from researchers' perspectives.

2.4.1 Evaluation from PAB members' perspectives
After each onboarding workshop and each PPI workshop, we asked PAB members to comment on the workshop via a short online feedback form containing three open questions on process and social interaction: (1) what did you like best today? (2) what did you miss today? (3) is there anything else you want to share with us? The anonymous written answers were transferred onto an Excel sheet and inserted into MAXQDA 2018. We analyzed answers grouped into feedback on the onboarding workshops and on the project-specific PAB meetings. Using thematic analysis, we took a deductive approach first and grouped the data with regard to the three questions in the online feedback form. The data were then coded inductively: answers were coded multiple times when they included multiple aspects. Finally, the codes were grouped into themes. These themes are presented in the results section with exemplary quotations from PAB members. However, marginal experiences are also mentioned in the results.

2.4.2 Evaluation from researchers' perspectives
To assess the social interaction within the PAB meetings, we asked researchers, similar to PAB members, after each PAB meeting, (1) what they liked best today and (2) what they felt was challenging. To assess PPI's impact, we further asked (3) with which aim they had involved the PAB, (4) if they felt this involvement was beneficial for their research and what should be different next time to make it more beneficial, (5) which changes to research were made due to the PAB meeting and (6) whether there was input from the PAB that was not included in the research and why. Written answers were inserted into MAXQDA 2018 and analyzed using thematic analysis. First, we used a deductive approach and grouped the data with regard to the six questions of the feedback form. The data were then coded inductively: answers were coded multiple times when they included multiple aspects. Finally, the codes were grouped into themes. These themes are presented in the results section with exemplary quotations from researchers. Marginal experiences are also mentioned in the results.
RESULTS

3.1 PAB members
Today the FPBRN's PAB has 11 members ranging from 17 to 70 years, with and without pre-existing conditions (see Table ). Only one patient preferred digital to face-to-face meetings at the recruitment stage. Nevertheless, the COVID-19 pandemic forced us to hold most meetings digitally. No PAB member resigned because of the predominantly digital format.

3.2 Recruitment strategies
The most successful recruitment strategy for patients to become PAB members was when they were informed about the PAB individually by their family practitioner. No patient was recruited via the display of flyers and information material in family practitioners' waiting rooms only. Two patients contacted us because they were informed about the PAB by a friend: a recruitment strategy we did not plan for in advance (Table ).

3.3 PPI workshops and PAB activities
From October 2021 to July 2023, we conducted two digital onboarding workshops for training and trained one PAB member individually. We conducted three digital and two in-person project-specific workshops in which the PAB gave input on research projects of the FPBRN. At these workshops, the coordinator of the PAB was present together with researchers from the project in question. Three PAB members gave feedback on two lay-language brochures with project results. We invited the PAB to the 'Day of Family Medicine' at our university hospital, and three PAB members joined us for lunch and the keynote lecture on 'Patient Involvement in Family Medicine Research'. Furthermore, PAB members joined the anniversary celebration of our FPBRN, and two of them took part in a plenary discussion on 'Research in the FPBRN as an interprofessional undertaking' (Table ).

3.4 Evaluation from PAB members' perspectives
We analyzed 10 feedback forms on two onboarding workshops and 30 feedback forms commenting on five project-specific workshops.

3.4.1 Onboarding—Intelligible information and congenial atmosphere
Concerning the onboarding workshops, PAB members positively stressed the intelligibility of the information provided. Concerning content, they especially liked the display of PAB members' roles and tasks and the introduction of the FPBRN. The responsiveness of the researchers who moderated was stressed: 'It was a very comprehensible and informative orientation meeting. I am very happy to be able to participate. The coordinators chaired the meeting very well and with a lot of empathy'. PAB members liked that 'everything was explained, in a friendly and patient manner'. PAB members furthermore mentioned the 'congenial open atmosphere' and felt that they were a 'good mixture'. Two participants wished for more time to get to know the other PAB members and a comprehensive introductory round. One member wished to meet in person.

3.4.2 Project-specific workshops—Exchange of perspectives and exciting topics
In PAB members' feedback on the benefits of the project-specific workshops, 'exchange' was the predominant topic: the PAB members stressed that they liked the exchange of ideas and perspectives with other PAB members, the 'exciting and open discussions' and the extra time to get to know each other. Similar to the onboarding workshops, PAB members liked the 'intelligible presentation' and 'graphic explanations'. They also mentioned the content of the five project-specific meetings positively. They liked the 'interesting information' and the 'exciting, future-oriented topic'. One PAB member summarized: 'It was highly informative. I liked the topic, the presentations and the exchange very much'. Answers on what PAB members felt was lacking were heterogeneous. While most had no wishes, the wish for more time to answer questions and give input was articulated twice. Two persons wished for more information on how the PAB members' feedback was included in the research projects, and one person wished to get to know how the project went on overall. Furthermore, in-person meetings were wished for twice, and one person wished for materials in advance to prepare for the meetings. The fourth and fifth meetings finally took place in person. The members present stressed their appreciation of the 'personal and direct' in-person discussions and felt that 'meeting in-person helps us to move forward'.

3.5 Evaluation from researchers' perspectives
We included 14 feedback sheets on five project-specific workshops from researchers in the analysis. Similar to PAB members, researchers very often underlined the open and lively discussions within the PAB: '[I liked best] that everyone was involved, experiences were shared openly and a dialogue evolved between board members and researchers'. Mentioned challenges encompassed time management and appropriate communication: one researcher found it hard to interrupt because discussions were so lively and enthusiastic, while another one found it challenging to 'keep the flow of the conversation running'. Furthermore, the preparation of study results for a patient audience was mentioned as a difficult task, while this preparation was also seen as a benefit, because it helped to reflect again on the projects' most important results, anticipating the patients' perspective. All researchers felt the PAB meetings were helpful and productive. Two project-specific workshops discussed study results with patients. In these cases, concrete changes could not be named, while the PAB's input helped researchers in weighing their assumptions and research findings from patients' perspectives and deciding on future research: 'The workshop underlined our findings from patients' perspectives, respectively a certain topic was strengthened that patients felt was especially important'. In three other workshops, PAB members were involved in studies in progress, that is, the selection of indications for a systematic review proposal, checking a patient questionnaire on comprehensibility and relevance and giving feedback on a prototype of information material on hypertension. In these cases, researchers also felt that the PAB's input was beneficial and improved the research a lot, while it was easier for them to name concrete changes to the study based on PAB members' input. However, most researchers also highlighted obstacles in transferring the PAB's input into research. For example, one researcher mentioned that it might be challenging to decide which input to prioritize given the diverse and sometimes contradicting perspectives of the PAB members. Furthermore, structural and methodological barriers were mentioned, such as using standardized items in a questionnaire that therefore can hardly be changed, as well as the limited overall length of the questionnaire: 'When it comes to validated items for the calculation of an index – there's very little room for adaptions. That's why we cannot implement some of the PAB's recommendation for methodological reasons'. Researchers also named time constraints and deadlines from funding agencies as barriers to fully integrating the PAB members' feedback. In other cases, the processing of the PAB members' feedback depended on cooperation partners and was therefore not predominantly in the hands of the attending researchers: 'Naming concrete changes is difficult, because we do not solely decide about the implementation. Having said this, I believe that the PAB's stressing of personal communication between patients, health care assistants and family practitioners was important for the future course of the project and that the PAB affected this future course'.
DISCUSSION PAB members stressed the fruitful and open atmosphere, appreciated the changing topics of each meeting and liked the exchange of ideas and perspectives with one another and the researchers. The building of this relationship succeeded, even though most meetings took place in a digital setting, because we planned time within each meeting for getting to know each other and for social interaction. With the end of pandemic-related restrictions on social contact, many PAB members strongly appreciated meeting in person. Others pointed out the increasing challenge of combining PAB activities with work duties when travelling to in-person PAB meetings. In the future, a mix of in-person and digital meetings seems feasible. The most successful recruitment strategy was family practitioners inviting patients personally to join the PAB. Other successful recruitment strategies also involved personal interactions, while the sole display of flyers in family practices and pharmacies did not motivate any patients to join the PAB. This stresses the importance of trust and sustainable relationships in PPI, while it also raises the question of representation (see Section ). The preparation of research material for workshops with the PAB was seen as demanding by some researchers, although it paid off both for researchers—who reflected on the significance of their research for patients and the public—and for PAB members, who greatly appreciated the ‘intelligible presentation’ and ‘graphic explanations’. While all researchers felt that the PAB meetings played a crucial role in weighing findings and emphasizing certain aspects of their projects, some researchers could not name concrete changes that were based on the PAB meetings. This was partly due to the content of the meetings, that is, discussions of project results, but also to methodological and structural barriers to implementation such as standardized questionnaire items, deadlines from funding agencies or the need to come to terms with cooperating partners. These barriers relate to contemporary research structures that are in many cases highly formalized, competitive, involving multiple players and dependent on project-based external funding. In these surroundings, providing ‘space to talk’, but also providing and being transparent about ‘space to change’, is especially important. Researchers must communicate openly on research structures, but also on the choices they make and the reasons for these choices when it comes to actual changes made to research projects based on PPI. This is important to prevent ‘sham participation’, and because PAB members stressed the importance of being informed about the impact of their meetings and the progress of the research projects they discussed. Concerning authorship and acknowledgement of contributions to research, we initiated a discussion within the PAB on the importance of visibility by providing individual names and the possibility of protection by using a group identity. The PAB decided that they do not want their names to appear on the FPBRN’s website or elsewhere. In publications, the PAB’s contribution is honoured in the acknowledgements.
With regard to the current level of involvement, that is, the PAB’s counselling on research projects within single sessions, coauthorship has not been feasible so far, but this may change in the future. If individual members decide to contribute to research-associated events such as panel discussions, they are represented by name just like all other speakers. The PAB’s decision on this topic is a matter of constant reconsideration by members. The COVID-19 pandemic and the switch to digital formats might have prevented some patients from joining the meetings, which were predominantly digital during the pandemic. At first, we hesitated to start the PAB in an online-only environment. Because of very positive experiences with digital PPI and encouraging evaluation results from patients in a study on multimedication, we decided to get started anyway. We implemented the supporting tools used in that study, such as technical introduction workshops and technical support throughout the meetings, and incorporated extra time for discussions and getting to know each other. None of the PAB members dropped out during the pandemic because of the digital format, but some might not have joined at all due to software and hardware barriers. On the other hand, we know from other studies as well as feedback from PAB members that digital formats can also reduce barriers, as travel restrictions do not apply and participants can tailor their personal environment to suit their individual needs. At the end of the pandemic, most PAB members wished for a meeting in person and felt that ‘meeting in-person helps us to move forward’. We will focus on the shift from online to in-person meetings and how this may influence communication dynamics within the PAB. LIMITATIONS Even though we theoretically gave everyone interested and present in a family research practice the chance to join the PAB by displaying flyers in waiting rooms, our recruitment strategies might be selective. This might be especially true as most patients joined by personal invitation through their family practitioners, and we have no information on why family practitioners approached particular patients. This touches on the topic of representation, which is always an issue in PPI when a selected group of patients speaks for a larger group. We aimed to approach PAB members as patient experts at eye level and therefore decided not to collect extensive private, health-related data from them. Therefore, we can only draw conclusions on the diversity of the PAB on the basis of age, gender and pre-existing health condition (yes or no). Even though our PAB does represent a wide range of ages and health conditions, we cannot provide information on demographics like migration status or educational level. Also, our initial recruitment strategy was not based on either of these characteristics, but we aim to consider this in the future. Furthermore, we wish to stress that our PAB consists of persons who contribute their individual everyday experiences with healthcare in family practice, given the fact that we ruled out patient representatives from patient organizations. By doing so, we aimed to prevent a special condition from becoming the focus of our discussions, in which the family practice is always at the centre. Nevertheless, this focus on individual experiences also excludes the wide range of background knowledge and accumulated knowledge of different patient experiences that patient representatives may provide.
Finally, the evaluation presented in this article is based on PAB members' and researchers' feedback on a couple of single PAB meetings. Even though we collected feedback data at several points in time, our evaluation data contains no information on PAB members' experiences with the overall PPI process within the FPBRN, i.e. if they had wished for more training, a different level of involvement, or another PPI format different from group workshops. In the future, we plan for an overarching evaluation that shall assess patients' overall experiences with the PAB. There are some standardized tools to assess patients' experiences with PPI as well as frameworks that will inspire our evaluation. , , Nevertheless, we aim to develop a guideline for qualitative interviews that addresses the specific tasks, processes and structures of the FPBRN and the PAB within this network to adjust the PAB and PPI activities accordingly. Concerning the researchers’ perspectives, our evaluation results are limited as well. First, similar to patients, researchers were surveyed at one point in time only, that is 1–2 weeks after the workshop. Reflections, processes and changes to research that occurred after this period could not be assessed. Second, our evaluation is limited to those researchers within the FPBRN that had direct contact with the PAB within a workshop. Most probably these researchers had a positive mindset and were open towards PPI. An extended evaluation could survey all researchers of the FPBRN and assess their attitudes towards PPI in general as well as their knowledge and perception of the PAB to assess the structural and longitudinal changes that the PAB initiated. , , The evaluation results will then inform future directions of the PAB and of PPI activities within the FPBRN in general. CONCLUSION The establishment of a standing PAB in family practice research is feasible and productive both from patients' and researchers' perspectives. PABs should be considered an integral part of research infrastructure in family practice research and beyond and their establishment should be fostered further. Jennifer Engler : Conceptualization; investigation; methodology; writing—review and editing; writing—original draft; project administration; formal analysis; resources; supervision; data curation; validation. Fabian Engler : Writing—review and editing; data curation; investigation. Meike Gerber : Writing—review and editing; investigation; data curation. Franziska Brosse : Writing—review and editing. Karen Voigt : Writing—review and editing; funding acquisition; supervision; project administration; resources. Karola Mergenthal : Supervision; resources; project administration; writing—review and editing; Conceptualization. The authors declare no conflict of interest. We informed the local ethics committee of The University Hospital of Goethe University Frankfurt am Main about our intention to establish a patient advisory board (PAB) and to hold patient and public involvement workshops with PAB members. The ethics committee expressed no concerns and waived a formal approval on the basis of the Medical Association's professional code of conduct in Hesse/Germany (§ 15 BO hess. Ärzte). All PAB members gave written informed consent to the processing of workshop results for academic purposes. |
Banff 2019 Meeting Report: Molecular diagnostics in solid organ transplantation–Consensus for the Banff Human Organ Transplant (B‐HOT) gene panel and open source multicenter validation | bd25df73-89af-40e6-b731-be0118d2e89b | 7496585 | Pathology[mh] | INTRODUCTION The XV Banff Conference for Allograft Pathology was held on September 23‐27, 2019, in Pittsburgh,Pennsylvania. One main topic, continuing a theme from two previous Banff meetings, was to include applications of molecular techniques for transplant biopsies and to articulate a roadmap for the clinical adoption of molecular transplant diagnostics for allograft biopsies. This meeting report summarizes the progress made by the Banff Molecular Diagnostics Working Group (MDWG) and the resulting next steps from the 2019 conference. CHALLENGES IN MOLECULAR TRANSPLANT DIAGNOSTICS The MDWG identified several challenges in the clinical application of molecular diagnostics. Different assays that measure different sets of genes validated for slightly different clinical contexts create a major analytical challenge. Enrolling patients into multicenter molecular diagnostic trials becomes problematic if local molecular diagnostic tests and risk stratification are done by noncomparable assays. The lack of a diagnostic gold standard for clinical validation of new molecular diagnostics requires multicenter standardization and independent validation in prospective randomized trials. Clinical and pathologic indications for molecular testing need to be defined and validated. Molecular tests must be cost effective to increase diagnostic utility beyond histopathology. For useful molecular diagnostics turnaround time needs to match immediate clinical needs. The integration of molecular tests with other diagnostic and clinical information requires standardization to make diagnosis and risk stratification comparable between centers. Industry partnerships are needed to advance the field, but transparency and appropriate disclosure of potential conflicts of interest are paramount. The MDWG believes that the present report shows a pathway that can address many of these issues. EVOLUTION OF MOLECULAR TRANSPLANT DIAGNOSTICS Over the past 20 years, we estimate that more than 4000 organ transplant biopsies have been studied by whole transcriptome microarrays. These have been conducted independently by several research groups, covering transplant biopsies of kidneys , , , , and, to a lesser extent, other organs. , , , , , Different analytical approaches addressing relevant research questions from these data have been made available and reproduced by several research groups and transplant centers, covering a broad spectrum of phenotypes and patient demographics. These studies led to potential diagnostic applications as well as major novel mechanistic insights with changes to the Banff classification, for example, the adoption of C4d‐negative antibody‐mediated rejection (ABMR) and chronic‐active T cell–mediated rejection (TCMR) as new diagnostic categories. , , Using transcriptome arrays the molecular phenotype in renal allografts correlates well with relevant rejection clinical entities and phenotypes. , In liver transplantation, microarray studies confirmed that liver biopsies with TCMR share very similar transcriptional phenotypes with those in renal allograft biopsies. , Transcriptional similarities are also present in heart and lung allograft biopsies. 
These publications show that groups of genes within certain molecular pathways are statistically significantly associated with specific Banff histological lesions, rejection phenotypes, and Banff diagnostic categories. Transcript analysis also reveals potentially important underlying heterogeneities within diagnostic groups that are not perceived by pathology alone. In 2013, molecular diagnostics were added as an aspirational goal to the Banff classification. The molecular quantification of endothelial cell-associated transcripts and classifier-based prediction of donor-specific antibody-mediated tissue injury were adopted as diagnostic features/lesions equivalent to C4d for the diagnosis of ABMR. This was noted to be a forward-looking proposal at the time, because there was no consensus around which endothelial genes should be quantified and no independent multi-institutional validation for any diagnostic classifier or gene set. The main impetus in 2013 to adopt a molecular diagnostic option into the classification, despite these limitations, was to set the future direction for the Banff classification and to promote collaborative and multi-institutional, open source efforts to advance the field by validating, standardizing, and making molecular transplant diagnostics accessible to the broad transplant community. This is a foundational value of the Banff consortium. At the 2015 meeting, the Banff MDWG recommended the creation of molecular consensus gene sets as classifiers derived from the overlap between published and reproduced gene lists that associate with the main clinical phenotypes of TCMR and ABMR. Similar roadmaps and processes for clinical adoption have been reviewed extensively and proposed by other key opinion leaders in the field. Collaborative multicenter studies were proposed to close identified knowledge gaps and enable practical incorporation of molecular diagnostics into diagnostic classifications. The 2017 Banff meeting identified an initial validated, consensus gene list with potential specific indications for molecular testing. Importantly, a new technology, NanoString, which uses robust multiplex transcript quantitation from formalin-fixed, paraffin-embedded (FFPE) biopsies, was presented at this meeting. The compelling advantage of NanoString is that it performs transcriptional analysis on routine histological samples, allowing correlation of histologic and molecular phenotypes in the same tissue.
Although microarray analysis is the most established method for biopsies, alternative approaches that are less invasive than a biopsy, such as urine and blood transcript analysis, are attractive and under investigation. More practical technologies based on FFPE biopsy analysis are now available, in particular the NanoString nCounter system (NanoString Technologies, Seattle, WA). Several NanoString publications using FFPE transplant specimens identify transcript associations with the molecular and histologic phenotypes similar to those reported in microarray studies. Among the advantages of NanoString are that (1) a separate core processed at the time of biopsy is not required; (2) transcripts are assessed in the same sample analyzed by light microscopy; and (3) large retrospective and longitudinal analyses of archived samples can be readily performed in the setting of multicenter studies, which will enable retrospective randomization with long-term survival end points available (Table ). Over 1000 publications have reported its application and value. The NanoString system yields comparable results between FFPE and fresh frozen samples, with a sensitivity higher than that of microarrays and about equal to that of reverse transcription polymerase chain reaction (RT-PCR). In one assay, this technology uses color-coded molecular barcodes that can hybridize directly to up to 800 different targets with high reproducibility. NanoString thereby closes a gap between genome-wide expression (ie, microarrays and RNA sequencing as whole transcriptome discovery platforms) and mRNA expression profiling of a single target (ie, RT-PCR). But unlike quantitative RT-PCR, the NanoString system does not require enzymes and uses a single reaction per sample regardless of the level of multiplexing. Thus, it is simpler for the user and requires less sample per experiment for multiplex experiments, for example, pathway analysis, assessment of biomarker panels, or assessment of custom-made gene sets. The NanoString system is approved for clinical diagnostics and paired with user-friendly analytical software, thus representing a simple, relatively fast (24-hour turnaround time), automated platform that is well poised for integration into the routine diagnostic workflows of existing pathology laboratories. Synthetic DNA standard oligonucleotides, corresponding to each target probe in the panel, allow normalization of expression results between different reagent batches, platforms, and users. This permits standardization of diagnostic thresholds across multiple laboratories, a major challenge using microarrays and RNA sequencing. A major disadvantage of the NanoString approach is the need to predefine the gene panel and the restriction to 800 probes, making it better for follow-up studies once the discovery phase with microarrays has winnowed the possibilities to the most informative transcripts. The other disadvantages, shared with microarrays and RNA sequencing, are the loss of anatomic localization and the need for a biopsy. GENERATION OF A BANFF HUMAN ORGAN TRANSPLANT (B-HOT) PANEL The B-HOT panel includes the validated genes found informative in major peer-reviewed microarray and NanoString studies on kidney, heart, lung, and liver allograft biopsies, identified by the MDWG through literature review. A list of the genes with corresponding key publications is given in the Data .
In detail, candidate genes were identified using the key words "transplantation," "kidney," "heart," "lung," "liver," "gene expression," "molecule," and "transcripts." This search revealed 2521 publications indexed in PubMed; mining these publications for genes listed as significantly associated with any study variable yielded more than 4000 genes. After redundant and duplicate genes were removed, the list contained 1749 genes. Then the MDWG members identified the overlap between these genes and genes described in the peer-reviewed literature as being strongly associated with relevant clinical phenotypes, and identified 1050 genes to be considered for inclusion. In the next step, a list including all genes with consensus expert opinion was selected, and all HUGO duplicates were combined, leaving 670 unique genes. We initiated discussions with NanoString and learned they would be willing to make our panel widely available. However, their commercial panels typically have 770 genes, so they provided suggestions for additional genes to delineate relevant cellular pathways and cell types that have been used in other panels. Using an independent data-driven process, NanoString Technologies Inc recommended additional genes within relevant molecular pathways related to the 670 genes that were most informative according to their Ingenuity Pathways analysis. The final B-HOT panel included 758 genes covering the most pertinent genes from the core pathways and processes related to host responses to rejection of transplanted tissue, tolerance, drug-induced toxicity, and transplantation-associated viral infections (BK polyomavirus, cytomegalovirus, Epstein-Barr virus), plus 12 internal reference genes for quality control and normalization (Figures and , Table ). Through that approach the B-HOT gene panel was defined, further engineered, and made commercially available (https://www.NanoString.com/products/gene-expression-panels/gene-expression-panels-overview/human-organ-transplant-panel). The pathways added to the list are given in Figure and in more detail in the Table . The panel probes were also designed to cover different organ types for transplantation and for sequence homology with nonhuman primates to facilitate preclinical research applications. The panel's broad coverage of inflammatory, adaptive, and innate immune systems; signaling; and endothelial transcripts will likely be largely applicable across organ types but with some expected organ-specific variation. Furthermore, parenchymal transcripts will often be organ-specific, and many have been included (see Table ). We anticipate that continued discovery of other informative transcripts not included in the B-HOT panel will occur. To provide flexibility, up to 30 custom genes can be added to the B-HOT panel by an investigator. Although the panel has been commercialized for the nCounter platform, the gene list is not proprietary and probes based on the gene list can be designed to run on any transcript analytical platform. NEXT STEPS: MULTICENTER ANALYTICAL AND CLINICAL VALIDATION The Banff MDWG formed a voluntary, growing, and open international consortium, independent of commercial sponsorship, to develop future steps for validation, analyses, and database sharing. The focus of the next 2 years will be validation of the panel and discovery of the optimal algorithms and gene sets.
This will be enabled by (1) the B-HOT panel and its comprehensive probe standards for comparison between laboratories, batches, and runs; (2) a shared database containing clinical, laboratory, pathological, and transcript data; and (3) access to comprehensive, sophisticated bioinformatics. The next steps will be to document the analytical validity across laboratories and then determine the clinical validity. The clinical validity will be assessed by analyzing B-HOT transcripts in 1000 or more clinical biopsies (as of this report the consortium has run the B-HOT panel on over 600 samples). These results, along with standardized clinical and pathologic information, will be entered in a shared database, which will be interrogated to discover the most useful algorithms for clinical applications. Analytical validation for regulatory approval must document accuracy, precision, analytical sensitivity (reproducibility, coefficient of variation), reportable ranges, reference interval values, and analytical specificity. Calibration and control procedures must be determined, and the laboratory must be enrolled in external proficiency testing programs. Clinical validation is the next step. Even an assay with perfect analytical validity does not automatically imply association between the test result and a relevant clinical outcome or action. This requires access to relevant patient populations' material of adequately powered sample size to evaluate assay performance in a real-world clinical setting. Accordingly, the clinical utility of an assay needs to be established by providing evidence of improved, measurable clinical outcome or benefit that is directly related to the use of the test, that is, proof that the test adds significant value to patient care. This also needs to take into consideration how the assay is interpreted, reported, and applied in the context of clinical patient management. Ideally, proper evaluation of an assay's clinical utility requires prospective randomized controlled trials. The B-HOT panel will undergo all of these validation steps. In the next 2 years, retrospective, well-annotated cohorts will be analyzed for analytical and clinical validation. The MDWG is aligning joint efforts using available NanoString systems at participating centers for studying a broad spectrum of archived and well-annotated transplant biopsies. To centralize the resulting multicenter molecular data from archived transplant biopsies together with the related clinical and outcome data, algorithms, and tools for analysis (including explorative analytics, machine learning-based diagnostic approaches/classifiers, and risk prediction tools) with remote access by users across the world, a data integration platform (DIP) will be built (Figure ). Participating centers will be able to upload routinely collected transplant-related patient data in an anonymized and uniform fashion. A participating investigator will then be able to use all data in the DIP. Currently underway is the development of a consensus data template representing the variables and units to be included in the DIP. The NanoString data files also include important analytical parameters (quality control measures, background subtractions, normalization values) in addition to the individual gene expression values, which will also be part of the DIP to allow for standardization across laboratories and thus multicenter analytical validation of any diagnostic assays.
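To make the analytical-validation and normalization steps described above concrete, the following is a minimal Python sketch of how normalized counts for a shared control sample could be compared across laboratories. The gene names, count values, laboratory labels, choice of reference probes, and the 15% acceptance threshold are illustrative assumptions only and are not part of the B-HOT specification or of any Banff acceptance criterion.

```python
import numpy as np
import pandas as pd

# Hypothetical raw counts for one shared control sample measured in three laboratories
# (rows = probes, columns = labs); all values are invented for illustration.
raw = pd.DataFrame(
    {"lab_1": [980, 210, 55, 30500],
     "lab_2": [1480, 330, 90, 46000],
     "lab_3": [940, 225, 52, 29800]},
    index=["CXCL9", "GNLY", "ROBO4", "ACTB"],
)
reference_probes = ["ACTB"]  # stand-in for the panel's internal reference genes

def normalize(counts: pd.DataFrame, refs) -> pd.DataFrame:
    """Scale each lab so the geometric mean of the reference probes agrees across labs."""
    geo_mean = np.exp(np.log(counts.loc[refs]).mean(axis=0))   # per-lab geometric mean
    return counts.mul(geo_mean.mean() / geo_mean, axis=1)      # column-wise scaling

norm = normalize(raw, reference_probes).drop(index=reference_probes)

# Per-gene coefficient of variation (%) across laboratories after normalization.
cv_percent = norm.std(axis=1, ddof=1) / norm.mean(axis=1) * 100
print(cv_percent.round(1))
print("Probes above a 15% CV threshold:", cv_percent.index[cv_percent > 15].tolist())
```

In an actual validation exercise, the same logic would be applied to the synthetic standard oligonucleotides and to replicate runs across batches and platforms, with acceptance limits defined as part of the regulatory documentation.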
The output of this effort is expected to be a robust, well-characterized gene set (presumably a subset of the B-HOT panel or additional genes) and an analytic methodology for interpretation, which will be presented at a subsequent Banff meeting and published. We expect to see correlations with histologic diagnosis (including interpretations not revealed by routine pathology analysis), ongoing immunosuppressive therapy, prediction of outcome, and response to treatment. We (and others, we hope) will follow this by prospective, controlled clinical trials to fully define clinical utility. As a first evaluation, after the Banff meeting, a member of the MDWG, Neal Smith, performed an in silico assessment of the B-HOT panel genes using the archived Genomic Spatial Event databases from Halloran's group, which contain 764 kidney biopsy samples with microarray data and diagnostic classification as TCMR, chronic-active ABMR, mixed, acute kidney injury, no rejection, and normal. Briefly, 3 bioinformatics methods were used to see if they could identify the 6 diagnostic groups from the transcripts: (1) supervised, using the diagnostic and pathogenesis-based transcript sets of Halloran; (2) semisupervised, using NanoString pathways (Data ) plus CIBERSORT cell types; and (3) unsupervised principal component analysis. Results confirmed the correlation of expected gene sets in each analysis with the 6 diagnostic categories (Smith, manuscript in preparation). A description of the initial B-HOT results in kidney transplants to be presented at the 2020 American Transplant Conference reveals both expected and novel correlations with pathologic categories. The B-HOT panel will be commercially available for research use only. Whether B-HOT leads to a clinically indicated laboratory developed test remains to be seen. If it does, it will probably be a simplified panel. In the future, the international, open source, multicenter Banff DIP can serve as a reference point for generating a molecular diagnostic "gold-standard" in transplantation, similar to the Banff histology lesions and diagnoses agreed upon in 1991. As the Banff consensus rules for histology underwent refinement over the last 28 years as new knowledge emerged, any molecular "consensus" will also need to undergo constant refinement and, no doubt, further technological innovation. Only through integration with clinical decision-making and end points in clinical trials can the true clinical utility of molecular diagnostics be demonstrated. The authors of this manuscript have conflicts of interest to disclose as described by the American Journal of Transplantation. Michael Mengel received honoraria from Novartis, CSL Behring, and Vitaeris. Mark Haas received consulting fees from Shire ViroPharma, AstraZeneca, Novartis, and CareDx, and honoraria from CareDx. Robert Colvin is a consultant for Shire ViroPharma, CSL Behring, Alexion and eGenesis. Candice Roufosse has received consulting fees from Achillion and UCB. Ivy Rosales is a consultant for eGenesis. Enver Akalin received honorarium and research grant support from CareDx. Marian Clahsen-van Groningen received grant support from Astellas Pharma (paid to the Erasmus MC). A. Jake Demetris receives research support from Q2 Solutions and is a member of an Adjudication Committee for Novartis. None of these conflicts are relevant to this article. The other authors have no conflicts of interest to disclose. None of the authors has a financial interest in NanoString.
|
Effect of coffee thermal cycling on the surface properties and stainability of additively manufactured denture base resins in different layer thicknesses | 2314dcb5-957a-46ef-ac42-83043ca44dab | 11795347 | Dentistry[mh] | Seven specimens per test group were deemed adequate by a priori power analysis (f = 0.73, 1−β = 95%, α = 0.05) based on the results of a previous study that evaluated the surface roughness and stainability of additively and subtractively manufactured denture base materials. Ten specimens per test group were fabricated to increase the power. Two subtractively (Merz M-PM; Merz Dental GmbH [SM-M] and G-CAM; Graphenano DENTAL [SM-G]) and three additively (NextDent Denture 3D+; NextDent B.V. [AM-N], FREEPRINT denture; Detax [AM-F], and Denturetec; Saremco AG [AM-S]) manufactured denture base materials were used to fabricate disk-shaped specimens (Ø 10 mm × 2 mm) (Table ). For additively manufactured specimens, a disk-shaped standard tessellation language (STL) file was generated (Meshmixer v3.5.474; Autodesk Inc), transferred into nesting software (Composer; Asiga), and positioned at a 45-degree angle to the build platform. After automatically generating supports, this configuration was duplicated 10 times and the specimens were printed with a layer thickness of either 50 µm (AM-N-50, AM-F-50, and AM-S-50) or 100 µm (AM-N-100, AM-F-100, and AM-S-100) by using a digital light processing printer (Max UV; Asiga). After fabrication, AM-N specimens were cleaned in an ultrasonic bath containing ethanol for 3 min, followed by thorough cleaning in an ultrasonic bath containing fresh ethanol for 2 min. Then, specimens were light-polymerized by using the manufacturer's proprietary curing unit (NextDent LC-3DPrint Box; NextDent B.V.) for 30 min at 60°C. AM-F specimens were cleaned in an ultrasonic bath containing isopropanol for 3 min, followed by thorough cleaning in an ultrasonic bath containing fresh isopropanol for 3 min. AM-S specimens were cleaned by using isopropanol-soaked cloths until the excess resin was removed. All AM-F and AM-S specimens were light-polymerized by using a xenon-light polymerization unit (Otoflash G171; NK-Optik GmbH) for 4000 cycles (2 × 2000). For subtractively manufactured specimens, a cylinder (Ø 10 mm) was designed in STL format by using the same software, and a 5-axis milling unit (Milling unit M1; Zirkonzahn GmbH) was used to fabricate cylinders from prepolymerized CAD-CAM disks. These cylinders were then wet-sliced into disk-shaped specimens of the desired dimensions by using a precision cutter (Vari/cut VC-50; Leco Corporation). A non-contact optical profilometer equipped with a CWL 300 µm sensor (FRT MicroProf 100; Fries Research & Technology GmbH) was used to record Ra values with the parameters of 5.5 mm tracing length, 0.8 mm cut-off (Lc) value, 3 nm z-resolution, and a pixel density of 5501 points/line. Six linear traces (three horizontal and three vertical) that were 1 mm apart from each other were measured for each specimen, and these values were averaged by using software (Mark III; Fries Research & Technology GmbH). A slurry of coarse pumice in water (Pumice fine; Benco Dental) was used to conventionally polish one surface of all specimens for 90 s at 1500 rpm after initial Ra measurements. Fine polishing was performed by using a polishing paste (Fabulustre; Grobet USA) for an additional 90 s. All polishing procedures were performed on a polishing box (Poliereinheit PE5; Degussa AG).
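Returning to the a priori power analysis reported at the start of this section, the sketch below shows one way such a sample-size estimate can be reproduced in Python with statsmodels. The number of groups passed to the calculation is an assumption made for illustration, the original calculation may have been performed with dedicated software such as G*Power, and the output is not claimed to match the reported figure of seven specimens per group exactly.

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# Inputs reported above: effect size f = 0.73, alpha = 0.05, power (1 - beta) = 0.95.
# k_groups = 8 is an assumption (the study compares 8 material/layer-thickness groups).
k_groups = 8
n_total = FTestAnovaPower().solve_power(
    effect_size=0.73, alpha=0.05, power=0.95, k_groups=k_groups
)

# solve_power returns the total number of observations for a one-way ANOVA design.
print(f"Total specimens: {ceil(n_total)} (about {ceil(n_total / k_groups)} per group)")
```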
All specimens were ultrasonically cleaned in distilled water for 10 min (Eltrosonic Ultracleaner 07–08; Eltrosonic GmbH), dried with paper towels, and Ra measurements were repeated. A Vickers hardness tester (M-400 Hardness Tester; Leco Corp) was used to measure the initial MH values. Each specimen was subjected to a load of 245 mN for 30 s at five different sites that were at least 0.5 mm apart from each other. These values were then averaged to calculate the final MH value of each specimen. After MH measurements, color coordinates (L*, which corresponds to lightness; a*, which corresponds to redness; b*, which corresponds to yellowness) defined by the Commission Internationale de l'éclairage (CIE) were measured over a gray background by using a digital spectrophotometer (CM-26d; Konica Minolta). This spectrophotometer has an illumination aperture of 8 mm and uses the CIE D65 illuminant and the CIE Standard (2-degree) human observer characteristics in its color estimations. Before each measurement, the spectrophotometer was calibrated in line with the manufacturer's recommendations, and a saturated sucrose solution was used for optical contact between the specimen and the background. All measurements were performed three times in a temperature- and humidity-controlled room with daylight and these readings were averaged. After initial measurements, all specimens were subjected to coffee thermal cycling for 5000 cycles (SD Mechatronik Thermocycler; SD Mechatronik GmbH) at 5°C–55°C with a dwell time of 30 s and a transfer time of 10 s. A tablespoon of coffee (Intenso Roasted and Grounded; Kaffeehof GmbH) was dissolved in 177 mL of water to prepare the coffee solution, which was freshly made every 12 h by using a coffee machine. After coffee thermal cycling, coffee extracts were cleaned by gently brushing the specimens 10 times with a toothpaste (Nevadent Mint Fresh; DENTAL-Kosmetik GmbH) under running water (Figure ). Ra, MH, and color coordinate measurements were repeated after coffee thermal cycling. A single operator (N.W.) performed all experiments and procedures in the present study. The CIEDE2000 formula, with the parametric factors (kL, kC, and kH), which are correction terms for variation in experimental conditions, set to one, was used to calculate the color differences (ΔE 00 ) among materials:

\[
\Delta E_{00} = \left[\left(\frac{\Delta L'}{k_L S_L}\right)^{2} + \left(\frac{\Delta C'}{k_C S_C}\right)^{2} + \left(\frac{\Delta H'}{k_H S_H}\right)^{2} + R_T\left(\frac{\Delta C'}{k_C S_C}\right)\left(\frac{\Delta H'}{k_H S_H}\right)\right]^{1/2}
\]

Scanning electron microscopy (SEM) (LEO 440; Zeiss) images of one additional sample from all test groups were taken at each time interval (before polishing, after polishing, and after coffee thermal cycling) under ×50 magnification at 20 kV to analyze surface topography after coating the surface of the specimens with gold. Ra and MH data were evaluated by using a linear mixed effect model with material type, time interval, and the interaction between these factors as covariates. One-way ANOVA was used to analyze the ΔE 00 data. All analyses were performed by using software (SPSS v23; IBM Corp) at a significance level of α = 0.05. In addition, ΔE 00 values were further evaluated by the previously set perceptibility and acceptability thresholds for denture base resins (perceptibility: 1.72, acceptability: 4.08).
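As a concrete illustration of the ΔE 00 calculation above, here is a minimal Python sketch using scikit-image's CIEDE2000 implementation with all parametric factors set to one, as in the study; the CIELAB coordinates are invented values rather than measurements from this work.

```python
import numpy as np
from skimage.color import deltaE_ciede2000

# Hypothetical CIELAB coordinates of one specimen before and after coffee thermal cycling.
lab_initial = np.array([72.4, 1.8, 12.5])   # L*, a*, b* at baseline
lab_final = np.array([70.9, 2.3, 14.1])     # L*, a*, b* after staining

delta_e00 = float(deltaE_ciede2000(lab_initial, lab_final, kL=1, kC=1, kH=1))
print(f"Delta E00 = {delta_e00:.2f}")

# Compare against the published thresholds quoted above.
perceptibility, acceptability = 1.72, 4.08
print("perceptible" if delta_e00 > perceptibility else "imperceptible", "color change;",
      "acceptable" if delta_e00 <= acceptability else "unacceptable")
```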
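The linear mixed-effects analysis named above (material type, time interval, and their interaction, with repeated measurements per specimen) can be sketched in Python as follows; the simulated data, group labels, and random-intercept structure are assumptions for illustration, since the actual analysis was run in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
materials = ["SM-M", "SM-G", "AM-N-50", "AM-N-100", "AM-F-50", "AM-F-100", "AM-S-50", "AM-S-100"]
times = ["before_polishing", "after_polishing", "after_coffee"]

# Simulated long-format data: one Ra value per specimen and time interval.
rows = []
for material in materials:
    for specimen in range(10):                      # 10 specimens per group, as in the study
        base = rng.uniform(0.15, 0.35)              # specimen-level baseline roughness
        for time in times:
            ra = base + (0.8 if time == "before_polishing" else 0.0) + rng.normal(0, 0.05)
            rows.append({"material": material, "time": time,
                         "specimen": f"{material}_{specimen}", "Ra": ra})
data = pd.DataFrame(rows)

# Fixed effects: material, time, and their interaction; a random intercept per specimen
# accounts for the repeated measurements made on the same disk.
model = smf.mixedlm("Ra ~ C(material) * C(time)", data, groups=data["specimen"])
print(model.fit().summary())
```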
The generalized linear model showed that the interaction between the material type and the time interval was effective on both Ra and MH, along with significant effects of material type and time interval as main factors ( p < 0.001). All materials had their highest Ra before polishing ( p ≤ 0.029) and the differences between after‐polishing and after‐coffee thermal cycling values were nonsignificant ( p ≥ 0.814). Before polishing, AM‐F‐100 had the highest Ra ( p < 0.001) that was followed by AM‐N‐100 ( p < 0.001). SM‐M and SM‐G had similar Ra ( p = 0.069) that was lower than those of other groups ( p < 0.001). AM‐N‐50, AM‐S‐100, and AM‐F‐50 had similar Ra ( p ≥ 0.225) that was higher than that of AM‐S‐50 ( p < 0.001). After polishing, SM‐G had lower Ra than all materials ( p ≤ 0.036) other than SM‐M and AM‐S‐50 ( p ≥ 0.121). Every other pairwise comparison was nonsignificant ( p ≥ 0.157). After coffee thermal cycling, SM‐G had lower Ra than all materials ( p ≤ 0.002) other than AM‐N‐50 and AM‐S‐50 ( p ≥ 0.116). In addition, AM‐N‐100 had higher Ra than AM‐F‐50 ( p < 0.001). The remaining pairwise comparisons were nonsignificant ( p ≥ 0.120). After polishing, SM‐G had higher MH than the remaining materials ( p ≤ 0.025) other than AM‐F‐50 ( p = 0.115). AM‐F‐50 and SM‐M had higher MH than AM‐S, AM‐N, and AM‐F‐100 ( p < 0.001). AM‐S‐100 had higher MH than AM‐F‐100, AM‐S‐50, and AM‐N‐50 ( p < 0.001). After coffee thermal cycling, AM‐S‐50, AM‐F‐100, and AM‐N had similar MH ( p ≥ 0.223) that was lower than that of other materials ( p < 0.001). In addition, AM‐F‐50 and SM‐G had higher MH than SM‐M ( p ≤ 0.004). Coffee thermal cycling reduced the MH of SM‐M ( p < 0.001) and increased that of AM‐S‐100 ( p = 0.024). However, it did not affect the MH of the remaining materials ( p ≥ 0.063) (Table ). One‐way ANOVA showed that the differences among ΔE 00 values were significant ( p = 0.019) (Table ). AM‐N‐100 had higher ΔE 00 than all materials ( p ≤ 0.009) other than SM‐M ( p = 0.074), AM‐S‐50 ( p = 0.375), and AM‐N‐50 ( p = 0.462). AM‐S‐50 and AM‐N‐50 had higher ΔE 00 than AM‐F‐100, AM‐F‐50, and SM‐G ( p ≤ 0.024). Every other pairwise comparison was nonsignificant ( p ≥ 0.057). Regardless of the material tested, SEM images before polishing had prominent irregularities. However, the surface of SM‐M and SM‐G was characterized by longitudinal lines, whereas lamellae were dominant on AM‐N and AM‐F specimens. Polishing significantly smoothened the surface of all specimens and pores became visible, while complex small lines were visible on the surfaces after coffee thermal cycling (Figure ). The first null hypothesis of the present study was rejected because the material type and time intervals affected the Ra of tested denture base materials. Regardless of the material, additively manufactured specimens with 100 µm layer thickness had higher Ra, which is in line with the results of a recent study that reported a similar trend when the specimens were fabricated at a 45‐degree angle. However, the differences in Ra within each material were nonsignificant after polishing and coffee thermal cycling. Before polishing, SM‐M and SM‐G had the lowest Ra, which could be associated with the fact that these specimens were fabricated by using prepolymerized PMMA disks that have lower residual monomer content and a higher degree of polymerization due to being fabricated under high pressure and high temperature. 
However, none of the test groups had Ra values that were either similar to or lower than the acceptable threshold of 0.2 µm before polishing. In line with previous studies, polishing significantly reduced these values, , , , , whereas coffee thermal cycling did not have a significant effect. When initial Ra values are excluded, the greatest difference with the clinically acceptable threshold belonged to AM‐N‐50 after polishing (0.30 µm). Considering that a quantitative difference of 0.1 µm is rather low, the authors believe this difference could be negligible. Nevertheless, this hypothesis needs to be corroborated by studies investigating the bacterial plaque accumulation on these surfaces. Recent studies have also compared the Ra values of denture base materials tested in the present study. , In one of those studies, SM‐M, SM‐G, and AM‐F were evaluated, and Çakmak et al. showed that AM‐F had lower Ra than SM‐M after polishing. The layer thickness of AM‐F specimens was not disclosed by Çakmak et al. which complicates a direct comparison between studies. The other study investigated how different cleansing methods affected the Ra of SM‐M, SM‐G, AM‐N‐50, and AM‐S‐50, but also involved polishing's effect on Ra as a factor. The authors concluded that the Ra of test groups were similar after polishing, which contradicts the results of the present study. However, 30 specimens per group were tested in that study and the increased number of specimens may affect the results of this study, given the high standard deviation values of SM‐M, AM‐N‐50, and AM‐S‐50 when their mean Ra was considered (Table ). While coffee thermal cycling did not affect the Ra values in the present study, previous studies contradict this finding. , Alp et al. evaluated the Ra of three subtractively, including SM‐M, and one conventionally manufactured denture base material, and showed that coffee thermocycling increased the Ra of all materials. In the other study mentioned above, the effect of coffee thermal cycling differed according to tested material and SM‐G was shown to have the lowest Ra after coffee thermal cycling. These contradicting results highlight the need for future studies involving longer durations of coffee thermal cycling on the Ra of additively and subtractively manufactured denture base materials as in all studies the specimens were subjected to 5000 cycles. SEM images are parallel with the Ra results as all materials had the most irregular surfaces before polishing. The difference in the topography of the surfaces before polishing could be associated with the differences in manufacturing methods. Prominent longitudinal lines visible on the surface of SM‐M and SM‐G are possibly due to milling instruments and precision cutter, while the layer‐by‐layer manufacturing principle is clearly visible in the SEM images of additively manufactured specimens, particularly those of AM‐N and AM‐F. In addition, the higher number of layers of specimens with 50 µm layer thickness compared with those of 100 µm layer thickness is also apparent. Even though polishing led to smoother surfaces for all specimens, AM‐N‐50 had a higher number of pores than AM‐N‐100, which might be associated with the nonsignificantly higher Ra values after polishing. When after‐coffee thermal cycling SEM images were considered, AM‐N‐100 had more prominent surface deterioration, which might have been caused by the absorption of water, when compared with AM‐N‐50. 
Increased layer thickness might have diminished the bond between consecutive layers during the fabrication of specimens with 100 µm layer thickness and led to the nonsignificant increase in their Ra. Regardless of time interval, SM‐G mostly had higher MH values than those of other materials and coffee thermal cycling reduced the MH of SM‐M and increased that of AM‐S‐100. Therefore, the second null hypothesis was rejected. When additively manufactured materials were further evaluated, AM‐F‐50 and AM‐S‐100 had higher MH than their counterparts in both time intervals, which could be interpreted as the effect of layer thickness on MH being material‐dependent as tested materials have different chemical compositions. These results contradict those of Lee et al., which showed that AM‐N‐50 had higher MH than AM‐N‐100. However, the specimens were polished by using a different methodology. and no thermal aging was performed. In addition, the MH of the other tested resin was not affected by the layer thickness, which corroborates the findings of the present study and the hypothesis on the effect of layer thickness on MH being material‐dependent. Even though the chemical composition of SM‐G is not disclosed by its manufacturer, it is the only nanographene‐reinforced material, which could be associated with its high MH values. The trend of high MH values was also observed with SM‐M, and standardized polymerization of subtractively manufactured denture base materials may again be related to this finding. Additively manufactured specimens mostly had lower MH values than those of subtractively manufactured specimens, which could be related to the residual monomers that negatively affect mechanical properties as these materials were post polymerized after fabrication. In addition, after coffee thermal cycling MH values of SM‐M were mostly higher than those of additively manufactured specimens after polishing. Therefore, it can be speculated that other than AM‐F‐50 and AM‐S‐100, tested additively manufactured denture base resins could be more prone to deformation intraorally even after SM‐M was subjected to long‐term consumption of coffee and intraoral thermal changes. It should also be noted that the decrease in the MH of SM‐M may be clinically negligible given that a universally accepted threshold value for hardness is not available. Also, longer exposures to thermal stresses might also diminish the MH of tested additively manufactured denture base resins. Studies comparing the MH of additively manufactured denture base resins with other denture base materials are limited , , , , , , and only three of those studies involved subtractively manufactured denture base resins. , , In a recent study, Çakmak et al. concluded that AM‐S had lower MH than SM‐G after thermal cycling. In addition, thermal cycling decreased the MH of SM‐G in that study. However, a direct comparison between the present and Çakmak et al. may be misleading, given that coffee was not involved in that study and the layer thickness of AM‐S was not disclosed. Even though no aging was performed in their study, Fouda et al. have also reported parallel results to those of the present study. The authors concluded that subtractively manufactured denture base materials had higher MH than additively manufactured denture base materials, which also involved AM‐N‐50. However, the MH values of AM‐N‐50 in the present study were higher than those in Fouda et al., which may be related to the differences in the printer used and the printing orientation. 
Prpić et al. stated that the additively manufactured denture base resin had higher Brinell's hardness than one of the tested subtractively manufactured denture base resins without any thermal cycling. The third null hypothesis was rejected as significant differences in ΔE 00 values were observed among tested materials. However, the differences between specimens fabricated with different layer thicknesses were nonsignificant within each additively manufactured material. In addition, among the test groups, only AM‐N‐100 had a mean perceptible color change as it had a slightly higher ΔE 00 value (1.76) than the perceptibility threshold (1.72) ; thus, all materials had acceptable color stability. A recent study has investigated the stainability of a tooth‐colored additively manufactured resin when fabricated by using different layer thicknesses. The authors have concluded that specimens fabricated with 25 µm‐thick layers had significantly lower ΔE 00 values than those fabricated with 100 µm‐thick layers, regardless of the immersion medium and duration. This contradiction between the present and Lee et al. may be associated with the differences in the resins tested. A recent study has also evaluated the stainability of SM‐M, SM‐G, and AM‐F after coffee thermal cycling. The authors concluded that only AM‐F had perceptible color change, while SM‐M and SM‐G had imperceptible color change. A limitation of the present study was that only two‐layer thicknesses were compared. However, it is possible to change the layer thickness between 25 and 150 µm while using a photopolymerization 3D printer. In addition, additively manufactured specimens were fabricated with a standardized printing orientation. However, other studies have reported fabricating additively manufactured specimens with different angles and orientations may affect the results. , , Even though all additively manufactured specimens were fabricated by using a standardized printer that was also used in previous studies, , different printers with the same or different technologies may lead to different results. Subtractively manufactured specimens were wet‐sliced by using a precision cutter after milling cylinder‐shaped specimens from prepolymerized disks. This was deliberately preferred to limit the amount of excess material that would be generated to fabricate the specimens. However, this methodology does not replicate actual clinical situations and may be considered as a limitation. Coffee thermal cycling might have exacerbated the results of the present study as only polished surfaces of dentures are in contact with staining solutions clinically. Given that the main of the present study was to evaluate the effect of printing layer thickness on different properties of additively manufactured denture base materials, only one staining solution was used. Nevertheless, different staining solutions may lead to different results and the possible effect of saliva was not simulated in the thermal cycling setup. Finally, a conventional polishing procedure was performed in the present study to evaluate the polishability of tested materials as a secondary outcome. However, the efficiency of polishing may be affected by the operator or the polishing method. 
The results of the present study indicated that the tested subtractively manufactured denture base resin was more stable in terms of the tested parameters and may be more resistant to mechanical and esthetic complications when compared with the tested additively manufactured denture base resin. Nevertheless, future studies should build on these results by testing other mechanical and optical properties of additively manufactured denture base resins with different layer thicknesses after being subjected to longer durations of aging or other possible stresses, such as brushing and chemical disinfection, to broaden the knowledge on their limitations. Polishing reduced the surface roughness of all materials significantly, whereas coffee thermal cycling did not significantly affect the surface roughness. Layer thickness of the tested additively manufactured resins only affected the microhardness values, as 50 µm layer thickness mostly led to higher microhardness. SM‐G mostly had higher microhardness, regardless of time interval. Coffee thermal cycling decreased the microhardness of SM‐M and increased that of AM‐S‐100. All materials had acceptable color stability. However, AM‐N‐100's color change after coffee thermal cycling was perceptible, considering published thresholds. Considering these results, the tested nanographene‐reinforced PMMA may be the favorable material in the long term for the fabrication of removable dentures among those tested, while the preference of layer thickness should be made according to the additively manufactured denture base resin. The authors declare no conflict of interest. The authors do not have any financial interest in the companies whose materials are included in this article. |
Characterization of the exopolysaccharides produced by the industrial yeast | a31d3987-280a-4bf7-8909-923ad9cf7141 | 11630240 | Microbiology[mh] | The industrial yeast, Komagataella phaffii (K. phaffii), is renowned for its application in developing recombinant proteins across diverse product categories, from industrial enzymes to therapeutic antibodies. More recently, biotechnology start-up companies have successfully employed K. phaffii for the production of recombinant proteins with a range of applications in food, from texture, flavor, and smell to nutritional content and health benefits (Barone et al., ). Examples of some of these products and companies include heme proteins (Motif FoodWorks and Impossible Foods), egg white proteins (The Every Company), milk proteins (Perfect Day and Remilk), and sweet proteins (Oobli). TurtleTree is producing lactoferrin as a food ingredient in select food and beverage applications using K. phaffii as a production host. Lactoferrin is an iron-binding glycoprotein naturally occurring in mammalian mucosal secretions (i.e., milk and saliva) and in neutrophil granules. Apart from its main biological function, namely binding and transporting iron ions, lactoferrin also has antibacterial, antiviral, antiparasitic, anticancer, and antiallergic functions and properties (García-Montoya et al., ). Bovine lactoferrin is a 689-amino acid glycoprotein with a molecular weight of 80–87 kDa (Hurley et al., ). Native bovine lactoferrin (bLf) has five putative N-linked glycosylation sites and is present in bovine milk in different glycoforms, of which the most abundant glycoform has four of the sites occupied with heterogeneous glycan structures (Wei et al., ). Despite the advantages of this platform host and the favorable qualities of the products it can produce, secreted proteins produced from these strains can be plagued by the presence of unwanted polysaccharides, known as exopolysaccharides (EPS) (Denton et al., ; Pan et al., ; Trimble et al., ). During high cell density fed-batch fermentation, K. phaffii strains have been known to produce EPS at concentrations up to 8.7 g/L in the supernatant (Steimann et al., ). To effectively remove these polysaccharides, additional downstream processing (DSP) steps, such as chromatography, are necessary (Li et al., ; O'Leary et al., ). Unfortunately, these additional DSP procedures are both costly and time-consuming for manufacturers. The origins of these carbohydrates and the influence of manufacturing processes on them remain largely unexplored. Steimann et al. summarize the existing knowledge of EPS and emphasize that, beyond the detection of EPS noted in some literature and basic structural information, a concrete understanding of this common and undesired coproduct is lacking. The industry needs a solution for increasing product purity in a cost-effective and efficient manner. The objective of this study was to deepen our understanding of the composition and source of the EPS produced by K. phaffii during fermentation. Characterization of the EPS molecular weight and structure is necessary to ensure product safety and inform DSP methods to achieve higher purity products. Exploring how host lineage and the stress of recombinant protein production influence polysaccharide formation can help support strain or process engineering techniques to reduce or eliminate EPS production or accumulation.
In addition, this study reports on the development and implementation of methods to monitor and track EPS levels during upstream and downstream process optimization for EPS reduction. The information provided in this study serves as a starting point for streamlining purification and manufacturing methods to achieve high purity products with low-cost DSP for K. phaffii. Strains The following Komagataella phaffii strains were used: BG10, YB-4290, YB-4290.TT_bLf, BG11.2_cutinase (BG10, ΔAox1, extra copy of Hac1, cutinase), and YB-4290.2_cutinase (YB-4290, ΔAox1, extra copy of Hac1, cutinase). Fed-Batch Fermentations All fed-batch fermentations were carried out using 2-L glass vessels equipped with inline control of temperature, pH, feed rate, dissolved oxygen, and mixing. Depending on the strain, the carbon source, pH, and duration of the fermentation varied. For all strains, the production phase was set to target a low growth rate similar to what is described in the literature (Life Technologies Invitrogen, ; Looser et al., ). Chemically defined FM22 media and PTM salts were used for all fermentations. Total Carbohydrate Assay The anthrone–sulfuric acid microplate assay adapted from Leyva et al. was used to determine total carbohydrate mass. The Sigma product M2069-5G, D-mannose, was used to prepare a standard curve. Carbohydrate Linkage, Monosaccharide, and Polysaccharide Composition Analysis The following were carried out by the Lebrilla Lab at the University of California, Davis, as described in previous publications (Bacalzo et al., ; Couture et al., ; Galermo et al., ; Xu et al., ). The carbohydrate glycosidic linkage analysis and monosaccharide analysis were performed using ultra high-performance liquid chromatography coupled with triple-quadrupole mass spectrometry (UHPLC/QqQ-MS). The polysaccharide analysis was carried out using high-performance liquid chromatography–quadrupole time-of-flight mass spectrometry (HPLC/QTOF-MS). The following sets of standards were used for quantification and identification. Linkage analysis standards included a library of 22 glycosidic linkages prepared using commercial oligosaccharide standards. Polysaccharide analysis standards included a set of 13 polysaccharides commonly found in food. Monosaccharide analysis standards included a pooled set of standards containing the 15 most common monosaccharides found in food. Carbohydrate Molecular Weight Analysis Gel permeation chromatography (GPC) was carried out by the Department of Chemistry, McGill University. A Shodex OHpak LB-806M column was used for separation, and a refractive index detector was used for detection. A pullulan polysaccharide standard (Shodex) was used to generate a standard curve (Arnling Bååth et al., ; Čížová et al., ; Verhertbruggen et al., ). Total Composition Analysis of Low Purity rbLf Sample Eurofins performed the following assays on the rbLf sample: ash (Eurofins method code FS044-1), total protein (Eurofins method code FSO4U-3), and bovine lactoferrin (Eurofins method code FS313-1). Moisture content was determined by drying and weighing. In this case, the total percent carbohydrate by weight was calculated by subtracting the mass of moisture, ash, and total protein from the total weight and then dividing by the total dry weight. Bovine Lactoferrin Quantification By HPLC An Agilent 1260 HPLC, equipped with an Agilent Poroshell 300SB-C8, 5-µm column and an Agilent Multi Wavelength Detector (220 nm), was used for rbLf sample resolution and detection. Bovine lactoferrin derived from milk (Sigma L047) was used to prepare a standard curve.
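As a rough illustration of how the anthrone–sulfuric acid readings described above might be converted into total carbohydrate concentrations against the D-mannose standard curve, the sketch below fits a simple linear calibration and interpolates unknown absorbances. All absorbance values and the dilution factor are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical D-mannose standard curve: concentration (mg/mL) vs. absorbance.
std_conc = np.array([0.0, 0.025, 0.05, 0.1, 0.2])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Linear least-squares fit: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_carbohydrate(sample_abs, dilution_factor=1.0):
    """Convert a sample absorbance to a carbohydrate concentration (mg/mL)."""
    return (sample_abs - intercept) / slope * dilution_factor

# Example: a broth supernatant diluted 100-fold before the assay.
print(round(total_carbohydrate(0.35, dilution_factor=100), 1), "mg/mL")
```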
Komagataella Phaffii Produces High-Molecular-Weight, Mannose Exopolysaccharides In this study, the K. phaffii host strain YB-4290 was used to overexpress and secrete rbLf (YB-4290.TT_bLf). When the rbLf is purified from the fermentation broth using standard filtration-based DSP methods followed by drying, the protein content is predominantly full-length rbLf; however, the percent total protein by weight is low. As much as 70% of the dry preparation by weight is carbohydrates, with only minor portions of moisture and ash (Fig. ).
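The ~70% carbohydrate figure comes from the by-difference calculation described in the total composition analysis above. A minimal sketch of that arithmetic is shown below with hypothetical masses, since the underlying batch-record values are not reproduced here.

```python
def carbohydrate_by_difference(total_g, moisture_g, ash_g, protein_g):
    """Estimate percent carbohydrate of a dried powder by difference:
    everything that is not moisture, ash, or protein is counted as carbohydrate,
    expressed relative to total dry weight (total minus moisture)."""
    dry_weight = total_g - moisture_g
    carbohydrate = total_g - moisture_g - ash_g - protein_g
    return 100.0 * carbohydrate / dry_weight

# Hypothetical 100 g batch: 5 g moisture, 3 g ash, 25 g total protein.
print(round(carbohydrate_by_difference(100.0, 5.0, 3.0, 25.0), 1), "% carbohydrate")
# -> 70.5 % carbohydrate, i.e., roughly the proportion reported for the rbLf powder
```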
The low purity rbLf powder sample was analyzed with mass spectrometry (MS) to characterize the carbohydrate content (Bacalzo et al., ; Xu et al., ). The MS method indicated less carbohydrate (41%) than the batch record calculation (70%), although both methods revealed a high proportion of carbohydrate in the product (Fig. ). While the MS method is accurate for analyzing the ratio of different structures, it may be affected by loss of product in the sample preparation steps leading up to MS analysis or by a lack of standards. The carbohydrates were analyzed with or without hydrolysis to characterize either the total or free monosaccharide composition, respectively. The results show a very small amount of free monosaccharides (<0.02%), while some of the carbohydrate is present as polysaccharides (5.7%) (Fig. ). The identities of the polysaccharides were determined by comparing their fingerprint chromatograms to those of 13 known polysaccharide standards. The major polysaccharides in the rbLf sample are mannan, galactomannan, and amylose (Fig. ). About one third of the total carbohydrate mass could not be classified as either free monosaccharides or polysaccharides. This fraction is called the “other carbohydrates” in Fig. and likely consists of mannose-based oligosaccharides with a range in degree of polymerization. The molecular weight of the polysaccharides in the rbLf sample was analyzed using GPC, a size-exclusion chromatography method, which indicates that their average molecular weight is 50 kDa (Fig. ). Because the polysaccharide fraction is similar in molecular weight to the lactoferrin, the two remain together during standard membrane-based DSP filtration steps designed to target the 84-kDa lactoferrin protein. Exopolysaccharide Production Trends in Fermentation From the Recombinant Bovine Lactoferrin Strain The strain YB-4290.TT_bLf was cultivated in 2-L fed-batch fermentation using glucose as a carbon source and a pH shift from 4.8 to 5.2 on day 3. Under these conditions, the dry cell weight (DCW) peaks near day 3, when the production phase starts, and remains relatively level throughout the rest of the fermentation (Fig. ). In contrast, the recombinant protein titer increases at a near linear rate during the production phase. The EPS production dynamics appear to mirror recombinant protein production and not biomass accumulation (Fig. ). No obvious signs of lysis were observed during fermentation, and glycans from the rbLf itself would be expected to make only a minor contribution to the total carbohydrates measured. In some cases, the titer of carbohydrates was six times greater than the amount of recombinant lactoferrin produced. Exopolysaccharide Production Trends in Fermentation From Different Host Strains Producing Cutinase Additional strains were built in order to test the impact of host strain lineage and expression of a different, non-N-glycosylated recombinant protein on EPS production. For this study, the well-expressed reporter protein Fusarium solani cutinase (cut) (Uniprot ID Q99174) was expressed in the two most commonly used lineages of K. phaffii, BG10 (closely related to Y-11430) and YB-4290 (closely related to Y-7556) (Table ). This also serves as a control for the bovine lactoferrin strain built using the YB-4290 host. In these strains, cutinase expression is controlled using the native methanol-inducible promoter from the Aox1 gene. The host strains BG10 and YB-4290 were grown using glucose as a carbon source. The strain YB-4290 consistently produced more total carbohydrate than BG10.
At the end of fermentation, YB-4290 and BG10 had similar biomass DCW, but YB-4290 produced significantly more total carbohydrate (Table and ). (Table abbreviations: NRRL, the Northern Regional Research Laboratory (aka National Center for Agricultural Utilization Research); ATUM, atum.bio; WT, wildtype.) The cutinase versions of these strains, BG11.2_cutinase and YB-4290.2_cutinase, were tested in fermentation using glucose as a carbon source during the batch phase and switching to methanol during the production phase for induction of cutinase expression. The two cutinase strains from the different host backgrounds secrete significantly different levels of the recombinant protein, and the YB-4290.2_cut strain produces close to double the amount of cutinase and also twice the amount of EPS as the BG11.2_cut strain (Table and ). Although both cutinase strains have elevated EPS production compared to the host strains, seemingly stimulated by recombinant protein production, it is difficult to say whether the total amount of EPS produced is due to differences in the recombinant protein titer or differences in the host strain backgrounds. As expected, when the carbohydrate composition of these broths was characterized by electrophoresis and carbohydrate staining, the YB-4290 and YB-4290.2_cutinase strains showed much stronger staining than the BG10 and BG11.2_cutinase strains.
EPS Characterization This study confirms the observation that K. phaffii produces an abundance of EPS that accumulate during high-cell-density fed-batch fermentation and further characterizes the composition, structure, and size of these EPS (Steimann et al., ).
In addition to what has been reported in this study, future work is needed to determine the composition of the uncharacterized carbohydrate fraction. This could include GPC separation or another size-exclusion method. These isolated fractions could then be further characterized by multiglycomic methods to confirm they are indeed mannose oligosaccharides. EPS Linked to Recombinant Protein Production and YB-4290 Lineage In contrast to Steimann et al., this study reports a very different effect of recombinant protein production and host lineage on the EPS production. The host strains, BG10 and YB-4290, produce a basal level of EPS that accumulates to 10–12 mg/ml by the end of fermentation; YB-4290 producing slightly more than BG10. In both host backgrounds, YB-4290 and BG10, overexpression of recombinant protein, either cutinase or rbLf, leads to significantly higher EPS levels compared to the host strains not overexpressing recombinant protein (Table ). In addition, EPS production dynamics appears to mirror recombinant protein production and not biomass accumulation. This interesting observation that overexpression of recombinant protein stimulates EPS production is surprising and in contrast to the findings of Steimann et al. In comparison to the rbLf strain, the cutinase strains secrete significantly less recombinant protein; however, there is no obvious correlation between recombinant protein titer and total amount of EPS produced. Genetic Influence Although these experiments suggest a positive correlation between recombinant protein production and EPS production, it is still unclear the impact of the host on total EPS productivity or the genetic mechanisms for EPS production. However, it is interesting to speculate that the host genotype could contribute to the EPS phenotype. The two host strain lineages used in this study were previously reported to contain single nucleotide polymorphisms (SNPs) at the Hoc1, Rsf2 and Sef1 loci; the Hoc1 and Rsf2 SNPs both result in truncations (Brady et al., ; Claes et al., ; Offei et al., ) (Table ). Hoc1 is an α-1,6-mannosyltransferase involved in cell wall mannan biosynthesis, and the truncation has been correlated with the thin cell wall phenotype of the BG10 lineage (Offei et al., ). Rsf2 is a putative zinc finger transcription factor, responsible for controlling genes required for glycerol-based growth, respiration, cellular morphogenesis, and alcohol metabolism and cell wall remodeling (Brady et al., ). It is possible that one or both of these truncations in BG10 lead to loss of function, resulting in a thin cell wall phenotype and the lower levels of EPS secretion observed in this report. Conversely, the wildtype version of these proteins in the YB-4290 lineage could result in the thicker cell wall along with the hyperaccumulation of EPS observed. Future work is required to test the linkage between host genotype and the cell wall thickness phenotype. Source of EPS Research with the model yeast Saccharomyces cerevisiae demonstrated that the cell wall is primarily composed of polysaccharide (68%–75% by dry weight), and is evenly balanced between glucan and mannan structures (Baek et al., ; Klis et al., ; Kogan & Kocher, ; Roelofsen, ). The glucans are synthesized at the cell wall whereas the mannans are added to the cell wall proteins in the secretory pathway. These mannosylated proteins are then transported to the cell wall in the same secretory vesicles used for recombinant protein secretion. 
The cotransport of both the EPS and recombinant protein could explain the correlation between production dynamics in fermentation as well as the significant stimulation of EPS in strains overexpressing recombinant protein. In contrast to cell wall polysaccharides enriched in glucose, the EPS are primarily mannose-based, likely originating from the cell wall mannoproteins. However, it is surprising that yeast would produce such an abundance of soluble mannans not associated with the cell wall. Stress from recombinant protein production or the unfolded protein response may disrupt the mannosylation pathway, leading to an overload of secretory vesicles with free mannans. Strains with truncations in the Hoc1 and Rsf2 may have impaired mannosylation, resulting in lower levels of secreted EPS. |
Study of secondary dentine deposition in central incisors as an age estimation method for adults | 515ae872-f74e-434c-9129-f64bcb149515 | 11790684 | Dentistry[mh] | Age is one of the four major biological profile characteristics used to establish individual identification, with a growing and essential role in forensics. The challenge of age estimation is higher in adults than in children, since dental and skeletal growth is settled and complexity increases as degenerative processes appear in adulthood. The most used indicators of chronological age rely on skeletal and dental evaluations, considering the influence of environmental factors, ethnic and sexual variability, and a secular trend. For age estimation, the Forensic Anthropology Society of Europe recommends a radiograph of clavicle-sternal fusion, a dental study with pulp chamber methods, and a physical examination, including hormone level measurement for women. There is broad consensus that tooth assessment, relying on dental age-related phenomena, is more predictable than the other two. When the radiation doses of the radiographs advocated for forensic purposes are compared with natural and man-made radiation exposure, it is accepted that the health risk is minimal. Exposure must respect the legislation of each country, recognizing that countries differ in how they regulate exposure for purposes other than medical ones. In most countries, radiation exposure for legal procedures is regarded as a social and individual benefit. The imaging procedure must be performed under informed consent, including the purpose and examination type. Alternatively, images can be acquired from archives. Dental age prediction in adults can rely on several methods, namely Gustafson's parameters, dentinal translucency, and cementum annulations. Recent developments in biochemistry have allowed exact age estimation. However, these techniques require extraction of teeth and, usually, tooth sectioning/processing, which may not be feasible in living adults or in certain jurisdictions that prohibit tissue collection from human remains. In 1925, Bodecker was the first author to recognize a correlation of dentine apposition with chronological age. Secondary dentine apposition and pulp chamber narrowing from adulthood onwards are well-recognized age indicators. After full tooth eruption, apical closure is essential for secondary dentine secretion to begin. In the meantime, the pulp area decreases. In 2004, Cameriere et al. introduced the pulp/tooth area ratio (PTAR) technique, measuring whole pulp and tooth areas and applying dedicated age estimation statistical analysis based on linear regression models. This method, measuring the upper right canines in orthopantomograms (OPGs), has obtained high levels of accuracy in age prediction and included the effect of population affinity and culture in its statistical formulation. It led to a simple and objective metric method of age estimation, recommended for adults and for individuals close to adulthood without third molars. Yet, some authors have claimed that PTAR models must be population-specific. Later, in 2013, Cameriere et al. developed a model using peri-apical digital X-rays of upper and lower lateral and central incisors. The total variance explained by the developed model was the following: (a) 51.3% in lower lateral incisors, (b) 56.5% in lower central incisors, (c) 80.3% in upper central incisors, and (d) 81.6% in upper lateral incisors.
The developed models were not tested in independent samples. Furthermore, the study was carried out on peri-apical X-rays from skulls, which may not entirely reproduce the actual clinical context and allow visualization of only a few teeth. This can be troublesome if the tooth the investigator was planning to assess cannot be evaluated (because it has a root canal treatment, for example) and might require further radiographs to be performed. Recently, doubts have arisen about the ethics of these procedures, and it is good practice to perform as few radiographs as possible. Thus, an obvious advantage is using an orthopantomogram, which allows for multiple age estimation techniques. The study aimed to contribute to age estimation using the pulp/tooth area ratio in incisors assessed in orthopantomograms. This research studied 801 patients' OPGs. An ethical statement was issued by the Ethical Commission of the Health Sciences of FMDUP (14/2022). The selected individuals were European with Portuguese nationality and place of birth in Portugal. The presence of systemic and dental disorders was adopted as an exclusion criterion. The teeth selected were the upper central incisors due to their favorable anatomy and because they undergo few environmental changes over a lifetime. Regarding tooth selection, only sound teeth were considered. The dental exclusion criteria were as follows: the presence of fillings, endodontic treatments, wear, fractures, impaction, extrusion, artifacts, developmental abnormalities, periodontal disease, peri-apical lesions, root resorption, open apex, multiple roots, multiple canals, pulp calcification, orthodontic treatment, moderate and severe superimposition, and rotation. The analysis and selection of OPGs considered the quality of the image, including the resolution features and the absence of magnification, noise, or artifacts. OPGs were classified into a wide range of groups by age, from 18 to 78 years old. Four hundred sixty-six belonged to female and 335 to male patients. The mean (M) chronological age (CA) of the participants was 37.01 years old (standard deviation (SD) = 15.10 years old). The median (Mdn) was 34.0 years old (interquartile range (IQR) = 24.0 years old). The patient distribution by age group and sex can be observed below (Table ). The PTAR measurements of both upper central incisors were performed without prior knowledge of the individual CA. The OPGs were obtained in JPEG format and numbered from one to 801. ImageJ® software version 1.8.0 (an open-source Java®-based image processing program developed by the National Institutes of Health and the Laboratory of Optical and Computational Instrumentation, LOCI, University of Wisconsin, USA) was used for semi-automatic area measurements. We acquired area measurements using the "freehand selections" mode of the ImageJ software to manually draw the pulp and tooth anatomical outlines (Fig. ). Regarding image optimization, the most frequently used editing tool was inversion, and the most used adjustment tools were contrast and brightness. Smoothing and sharpening processing tools were also very useful. Then, the pixel count of each drawn pulp and tooth outline was automatically converted into an area by the software, which performs the area calculation. Data were registered in Microsoft Excel®. Statistical analysis was performed using the Statistical Package for Social Sciences program (SPSS), version 27.0.
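Because the ratio of interest is simply the pulp area divided by the whole-tooth area, a minimal sketch of that step is shown below. It assumes the traced pulp and tooth outlines have been exported as binary masks (for example, from the freehand selections described above); this is an illustrative workflow rather than the exact procedure used in the study.

```python
import numpy as np

def pulp_tooth_area_ratio(pulp_mask: np.ndarray, tooth_mask: np.ndarray) -> float:
    """Compute PTAR from two boolean masks of identical shape.

    Each True pixel counts as one unit of area, so pixel size cancels out and
    no physical calibration is required for the ratio."""
    pulp_area = np.count_nonzero(pulp_mask)
    tooth_area = np.count_nonzero(tooth_mask)
    if tooth_area == 0:
        raise ValueError("Tooth mask is empty")
    return pulp_area / tooth_area

# Tiny synthetic example: a rectangular 'tooth' containing a small 'pulp'.
tooth = np.zeros((100, 100), dtype=bool)
tooth[10:90, 20:80] = True          # 80 x 60 = 4800 pixels
pulp = np.zeros_like(tooth)
pulp[40:60, 40:60] = True           # 20 x 20 = 400 pixels
print(round(pulp_tooth_area_ratio(pulp, tooth), 3))  # -> 0.083
```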
Reproducibility and repeatability were assessed using the Cronbach alpha coefficient by evaluating the agreement of the upper right central incisor (tooth 11) measurements in 30 randomly selected digital OPGs. The same OPG was examined three times for tooth and pulp measurements by the same observer (SMM), with 2 days between observations, and by another observer (IMC). Normality was tested using the Kolmogorov-Smirnov test. Descriptive analysis was performed for the continuous variable using M, SD, maximum (Max), minimum (Min), Mdn, and IQR. PTAR measurements were used for age estimation using Cameriere's regression model: Age = 78.55 − 3.86 · g − 313.45 · RA1sup, where g is the sex (0, female; 1, male) and RA1sup stands for the PTAR of the upper central incisor. The total variance explained by this model (R²) is 0.803, and the standard error of the estimate (SE) is 7.03 years. As both upper central incisors were measured, we estimated age using tooth 11 (EA11) and tooth 21 (EA21). The Pearson chi-square test was used to check possible associations between categorical variables. Spearman's rho was used to analyze possible correlations. Estimated age (EA) using Cameriere's equation was compared with CA. Resorting to linear regression, an age estimation model for the Portuguese population was developed. Then, the population sample was divided into six age groups (≤ 29, 30–39, 40–49, 50–59, 60–69, and 70–79 years old). Using a paired-sample t-test, EA with Cameriere's method and with the developed model was compared with CA in each group. The statistical significance level was set at 5%. The Cronbach's alpha values were 0.996 for inter-observer agreement and 0.991 for intra-observer agreement, both relatively high. The Kolmogorov-Smirnov normality test showed a skewed distribution ( p < 0.05). The M and Mdn of EA, using tooth 11, were 44.23 (SD = 7.27) and 44.45 years (IQR = 8.83), respectively. Using tooth 21, the M and Mdn of EA were 42.82 (SD = 7.73) and 43.14 years (IQR = 9.0), respectively (Table ). CA did not display a statistically significant association with the PTAR ( p = 0.423). Yet, this link was present when we divided the sample by age groups ( p < 0.001). As for the correlation between CA and EA, a moderate direct correlation was found using tooth 11 ( r = 0.679) and a slightly higher one with tooth 21 ( r = 0.706) (Table ). This correlation was statistically significant ( p < 0.001 for both). The Wilcoxon signed-rank test was employed to compare CA and EA within groups and in the total sample (Table ). There were statistically significant differences in both cases ( p < 0.001). The Z-score showed a slightly better relationship when using tooth 21. Using linear regression, a model for estimating age was developed. We observed sex and tooth laterality as possible confounding variables, and the variable sex was excluded, as it presented a low correlation value ( r = 0.018) with CA. Conversely, a moderate negative correlation was found between CA and PTAR, using tooth 11 ( r = − 0.672) and tooth 21 ( r = − 0.687). The variables PTAR 11 and PTAR 21 were strongly correlated ( r = 0.876).
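A direct implementation of Cameriere's regression model quoted above is sketched below; it simply codes sex as 0 for female and 1 for male and applies the published coefficients, so EA11 and EA21 differ only in which incisor's PTAR is supplied. The example PTAR values are hypothetical.

```python
def cameriere_age(ptar_upper_central: float, male: bool) -> float:
    """Estimated age (years) from the upper central incisor pulp/tooth area ratio,
    using Age = 78.55 - 3.86*g - 313.45*RA1sup with g = 0 (female) or 1 (male)."""
    g = 1 if male else 0
    return 78.55 - 3.86 * g - 313.45 * ptar_upper_central

# Hypothetical example: a female subject with PTAR = 0.11 measured on tooth 11
# and 0.12 on tooth 21 gives two estimates (EA11 and EA21).
print(round(cameriere_age(0.11, male=False), 1))  # -> 44.1 years
print(round(cameriere_age(0.12, male=False), 1))  # -> 40.9 years
```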
Yet, as this correlation did not pose an absolute contraindication and keeping both variables could increase the robustness of the prediction, both were kept. The assumptions for the model were checked, starting with the multicollinearity analysis. As mentioned, the independent variables presented a correlation with the dependent variable greater than 0.3 ( r = − 0.672 and r = − 0.687). The tolerance ( t = 0.233) and variance inflation factor (VIF = 4) were greater than 0.1 and less than 10, respectively. Thus, the assumption regarding multicollinearity was not violated. The assumptions regarding outliers, normality, linearity, homoskedasticity, and independence of residuals were also verified. The model explained 49.3% ( R² = 0.493) of the variance of age, and the built model was statistically significant ( p < 0.001). The PTAR 21 variable showed the most significant contribution to the developed equation. All variables presented statistical significance ( p < 0.001) (Table ). Paired-sample t-tests showed statistically significant differences between the EA means, using Cameriere's equation (EA11 and EA21) and using our model (EA3), and the CA means in all age groups (Table ). Age estimation in adults is particularly troublesome, as no developmental markers are available for these ages, and therefore age estimation relies on senescence indicators. Yet, age-related changes and environmental factors often alter these indicators, making it virtually impossible to discriminate reasonably between older ages. Our results illustrate that difficulty, as the statistically significant correlation between CA and EA (regardless of the model used) is lost in ages over 50 years, and all models underestimate age in all age groups. Other authors report similar difficulties in identical age groups, although using different teeth. These results suggest that PTAR, namely using central incisors, may not be suitable for age estimation over this age, and other methodologies should be used. Different results were obtained by Cameriere et al., who successfully applied this method in individuals older than 70, suggesting that population differences may exist, and these should be considered when choosing the methodology for age estimation. Also, the tooth selected to apply the method might play an important role, as most studies reporting accuracy in EA using PTAR refer to canines and lower premolars (Table ). Another critical factor to consider is that Cameriere's methodology was developed on apical radiographs, and forcing its use in orthopantomograms can lead to errors, as the incisors' images are undoubtedly distorted. Yet, we chose to use orthopantomograms due to the possibility of selecting different methods using one radiograph alone. Regarding estimating the age of the dead, an adaptation to the corpse's state of preservation is required. Portable apical radiographs could make obtaining the best angle for the best image easier. However, other methods, such as biochemical techniques and the Lamendin and Gustafson methods, are well established with good results. In ages 30–49, a correlation between CA and EA was found, regardless of the model used. This is of little value, as it would require knowing a person's age before the age estimation process. Age was underestimated, and statistically significant differences between CA and EA means were determined. This was true for all age groups, suggesting that this methodology may be inadequate for age estimation in this population.
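For a model with two predictors, tolerance and VIF follow directly from the correlation between them, so the values reported above can be reproduced from r = 0.876. The short check below is only a numerical verification of that standard relationship, not a re-analysis of the study data.

```python
# Two-predictor case: tolerance = 1 - r^2 and VIF = 1 / tolerance,
# where r is the correlation between PTAR 11 and PTAR 21.
r = 0.876
tolerance = 1 - r ** 2
vif = 1 / tolerance
print(round(tolerance, 3), round(vif, 1))  # -> 0.233 4.3, close to the reported t = 0.233 and VIF ~ 4
```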
Similar results were obtained by Jeevan et al., who found this methodology in canines useful up to age 45. On the other hand, Anastacio et al., who also studied a Portuguese population but applied the PTAR methodology on second premolars, found the methodology unreliable in all age groups. As stated, there was an age overestimation until age 50 and, from there on, an underestimation. This may happen because secondary dentine deposition is a finite process, and as time goes by and the pulp area diminishes, the quantity of secondary dentine deposited diminishes. Hence, the increase in tooth area is also lower. This means this method may also have an upper age limit for its use. This may differ in different populations and certainly with the tooth used, as explained in Table . Many authors justify the selection of a specific tooth for PTAR based on technical issues, such as visibility on the X-ray, less superimposition, and the frequent presence of the tooth (and its being less damaged), among others. Yet, other considerations should be made, namely the probable age frame and the population affinity. Regarding the existence of population-specific formulas, other investigators also support this claim, arguing that this approach considers the specific population's correlation with secondary dentine deposition. We believe this to be true, but the choice of the tooth in which the methodology will be applied should also reflect this, as different populations for different age intervals may favor other teeth. Cameriere et al. said that PTAR using incisors could be a helpful methodology in age estimation. However, when used in other teeth, the method offers better results, namely in canines and premolars. Yet, Zaher et al. reported high levels of accuracy using upper incisors, supporting the idea that tooth choice matters. The limitations of the present study include the low representativeness of older adults, which may have reduced accuracy in the assessment of the older age groups. Additionally, regression equations will always overestimate the younger age group and underestimate the older age groups. In the future, the developed model should be tested in an independent sample. Our results indicate that the upper incisors' pulp/tooth area ratio, assessed in orthopantomograms, overestimated age, and that statistically significant differences between chronological and estimated age are present. For those over 50, no correlation between pulp/tooth area ratio and chronological age was found, suggesting that this may be the upper limit of this technique in this population. Age estimation in adults is a complex process. The pulp/tooth area ratio in incisors has been proposed as a valuable methodology for adult age estimation. In orthopantomograms, pulp/tooth area ratio analyses overestimate age. In orthopantomograms, pulp/tooth area ratio analysis does not work in people over 50. |
Advances and challenges in cardiology | d689e81e-1e69-4b22-b6b0-07a6cff80733 | 11321536 | Internal Medicine[mh] | |
A rare adult case of primary uterine rhabdomyosarcoma with mixed pattern: a clinicopathological & immunohistochemical study with literature review | bd3f6459-401a-46b3-9699-7ff4ee79caca | 11253370 | Anatomy[mh] | Rhabdomyosarcoma (RMS) is an aggressive malignant mesenchymal tumor of striated muscle origin that is more commonly diagnosed in children and adolescents than in adults . It develops essentially in the deep soft tissue of the neck, extremities, and perineal region . According to the World Health Organization (WHO) classification introduced in 2020, rhabdomyosarcoma is subclassified into four major subtypes: embryonal (ERMS), alveolar (ARMS), pleomorphic (PRMS), and spindle cell/sclerosing . Primary uterine rhabdomyosarcoma can present as a heterologous differentiation in uterine carcinosarcoma or adenosarcoma or, far less commonly, arise as a pure uterine rhabdomyosarcoma . Primary pure rhabdomyosarcoma infrequently involves gynecological regions, where the embryonal subtype represents more than 75% of cases, especially in children with DICER1 syndrome, and is associated with a favorable prognosis in comparison with ARMS and PRMS . ARMS and PRMS are seen nearly exclusively in adults, with PRMS typically involving post-menopausal females . Some rhabdomyosarcomas contain histologic features of multiple subtypes. In 1995, Pappo et al. reported that the presence of any alveolar element translates into a bad prognosis. The biologic basis for these mixed tumors is currently unknown, although some studies suggest that even the embryonal elements of "bad" tumors have genetic features of ARMS . Rhabdomyosarcomas with mixed embryonal and alveolar features were previously thought to be a form of alveolar RMS, but studies have shown that most lack PAX3/7::FOXO1 fusions, suggesting that such tumors are more in line with embryonal RMS. However, some mixed tumors have had detectable gene fusions, which clearly would be more in keeping with alveolar RMS . Owing to its rarity, there are limited published data regarding the frequency and clinicopathological features of primary pure uterine rhabdomyosarcoma. Therefore, the current study describes the clinicopathologic & immunohistochemical features of a new case of uterine RMS in an adult woman and also reviews the available cytological and clinicopathological findings of previously reported adult uterine RMS cases in the English literature, with the goal of improving recognition of this tumor outside of its classical setting. Case: Clinical data A female patient aged 68 years presented with an abdominal mass and abnormal uterine bleeding. No specific medical or surgical history (including a history of previous radiation exposure) was reported. Imaging studies demonstrated multiple intra-luminal and intra-mural uterine masses with peritoneal deposits. The patient underwent TAH+BSO with excision of peritoneal deposits. The specimen was preserved in 10% formalin and referred to the Pathology Department Lab, Faculty of Medicine, Tanta University, Egypt. The patient's clinical data, including name, age, medical and surgical history, contact information & type of operation performed, were all recorded. Gross examination The specimen was registered, coded, and underwent pathological analysis. Pathological aspects that were assessed included the tumor site, tumor size & extension. Meticulous sampling of the tumor was performed (one section for every 2 cm of the tumor).
All submitted sections from the primary uterine tumor obtained from the received specimen were readily available for histopathological examination and further immunohistochemical studies. Formalin-fixed paraffin-embedded (FFPE) tissues were processed for light microscopic examination, and histological sections were stained using hematoxylin and eosin (H&E) stains. Paraffin blocks were then selected for immunohistochemical procedures. Histopathological examination Histopathological features which were evaluated included pattern of growth, presence of any epithelial elements, presence of other heterologous elements, cellular features, nuclear pleomorphism, mitotic activity, amount of rhabdomyoblastic cells, myometrial invasion, vascular invasion and extra-uterine extension. Immunohistochemistry Immunohistochemical studies were performed on FFPE selected blocks from the tumor. The (FFPE) blocks were sectioned (5 µm thick) on positively charged slides and were dried for 30 min at 37°C. The slides were placed in Dako PT Link unit for deparaffinization and antigen retrieval. EnVisionTM FLEX Target Retrieval Solution with a high pH was used at 97°C for 20 minutes. Immunohistochemistry was performed using Dako Autostainer Link 48. For 10 minutes, slides were immersed in Peroxidase-Blocking Reagent, incubated with primary antibodies utilized in this study (summarized in Table ). Following that, the slides were treated for 20 minutes with horseradish peroxidase polymer reagent and 10 minutes with diaminobenzidine chromogen. After that, the slides were counterstained with hematoxylin. Follow up data Clinical & follow up information were all obtained from patient medical record and by contacting the referring physician & patient family as well. Literature review A systematic review of the English-language literature since 1972 for "primary uterine rhabdomyosarcoma" in adults above 30 years of age was conducted.
Besides, solid and densely cellular areas showing aggregates of pleomorphic cells with bizarre-looking nuclei and multinucleated tumor giant cells were seen. The tumor diffusely infiltrated the uterine wall (corpus and cervical stump), dissecting the myometrium up to the serosa. Although scarce entrapped benign endometrial and endocervical glands were encountered, no malignant epithelial component was detected (the tumor was re-sectioned and thoroughly examined to ensure the absence of any neoplastic epithelial element, whether adenomatous or carcinomatous). Frequent lymphovascular and perineural invasion was seen, together with infiltration of the peritoneal fat. Figure (a-l) demonstrates different histopathological features of the studied case.
Immunohistochemistry
Both vimentin and desmin showed diffuse, heterogeneous, strongly positive cytoplasmic staining (Figure : a-d). Myogenin also showed heterogeneous positive nuclear staining, but of moderate intensity, with accentuation in alveolar areas and rhabdomyoblastic cells (Figure : e, f). Tumor cells showed membranous positivity for CD56 and cytoplasmic positivity for WT-1 (Figure : g-j). SMA, CD10, ER, cyclin D1, CD99, S100, and LCA were all negative. No malignant epithelial element was identified with pan-cytokeratin or ER. OLIG2 was negative as well.
Follow up data
The patient received postoperative chemotherapy and radiotherapy but died of complications of systemic metastases 3 months after surgery.
Diagnosis and tumour stage
The final diagnosis was primary uterine rhabdomyosarcoma with a mixed pattern (embryonal and alveolar). Based on the TNM staging system for uterine sarcoma endorsed by the American Joint Committee on Cancer (AJCC) and the parallel system formulated by the International Federation of Gynecology and Obstetrics (FIGO) 2018 update , the tumour stage was pT2NxM1, Stage Group and FIGO Stage IVB.
Literature review
The reported cases retrieved by the systematic review were summarized and tabulated in chronological order (Table ).
The current study handled a very rare and interesting case of a primary uterine mixed embryonal and alveolar type rhabdomyosarcoma involving both the uterine corpus and cervix in a 68-year-old woman, which provided an opportunity to highlight different aspects of the diagnosis and differential diagnosis of primary uterine RMS, as well as a better understanding of RMS classification and the characteristics of each subtype, by surveying recent related publications. The systematic review of the English-language literature that focused on primary uterine rhabdomyosarcoma in adults above 30 years of age uncovered 87 cases between 1972 and 2023. The recorded available variables, including age, RMS type, tumor size/weight, treatment methods, and follow-up, are shown in Table . To our knowledge, this is the broadest literature review collection of such rare cases. Mixed pattern RMS (ARMS and ERMS) constitutes a diagnostic dilemma regarding its histopathological features.
Whereas some confusion may easily occur between ARMS cases that show solid areas reminiscent of ERMS and ERMS cases with a dense pattern that may resemble solid ARMS, truly histologically mixed pattern rhabdomyosarcomas are rare tumors, and the designation is applied only to selected cases. These tumors exhibit separate, discrete ARMS and ERMS morphology with a variable extent of each component . Originally, it was sufficient to establish the diagnosis of ARMS if any focus of alveolar morphology was identified, and tumors that exhibited discrete areas of both alveolar and embryonal histology "of any histologic pattern of ERMS" were diagnosed as ARMS . In cases of a malignant mesenchymal tumor in the uterus, extensive sampling is necessary to exclude sarcomatous overgrowth in adenosarcoma or carcinosarcoma. Adenosarcoma is generally characterized by broad leaf-like or club-like projections. In the present case, extensive sampling of the surgical specimen and cytokeratin immunostaining failed to reveal any neoplastic epithelial elements, so adenosarcoma and carcinosarcoma were ruled out. The tumor cells were immunohistochemically positive for vimentin and for striated muscle markers such as desmin and myogenin, but negative for SMA. These findings were similar to those reported by others . Expression of desmin and myogenin is reciprocally related to the degree of cellular differentiation: more myogenin staining is seen in primitive-appearing cells, with decreased or absent immunoreactivity in large differentiated rhabdomyoblasts, and the opposite has been reported for desmin . Endometrial stromal sarcoma was excluded in this case by negative immunostaining for CD10, ER, CD99, and cyclin D1. WT-1 showed only cytoplasmic staining with absent nuclear staining, supporting the idea that tumors with this phenotype exhibit WT1 deregulation. The immunohistochemical results were in line with previous findings that WT-1 protein does not act as a nuclear transcription factor in such tumors but instead is stabilized in the cytoplasm . CD56 showed membranous staining in tumor cells; it is a sensitive marker of poorly differentiated neuroendocrine carcinomas. However, the results highlight the lack of specificity of this antibody, especially in clinical situations where small cell carcinoma is suspected. Moreover, Bahrami et al. reported in 2008 that it may also be expressed in almost all other small round cell neoplasms . The CD56 expression in the current case is in keeping with these prior findings. One important implication of the findings in the presented case is the recognition that ARMS can display a wide immunophenotypical spectrum, which draws attention to the need to avoid misdiagnosis, as this tumor can morphologically resemble other small round cell tumors. The histogenesis of rhabdomyosarcomatous differentiation in uterine RMS is not fully understood, but it could arise from primitive or uncommitted mesenchymal cells that undergo rhabdomyosarcomatous differentiation. An alternative theory suggests that uterine RMS represents sarcomatous overgrowth in adenosarcoma or carcinosarcoma, although this would be difficult to prove in practice . The chromosomal translocations t(2;13)(q35;q14) and t(1;13)(p36;q14) are characteristic of soft tissue alveolar rhabdomyosarcoma.
Molecular classification has been proposed, dividing RMS into two basic groups: fusion-positive RMS (either PAX7::FOXO1 enriched or PAX3::FOXO1 enriched) and fusion-negative RMS (which is further subdivided into well differentiated RMS, moderately differentiated RMS, and undifferentiated sarcomas) . ERMS and PRMS are typically fusion negative, whereas ARMS with the t(2;13)/PAX3::FOXO1 translocation has a worse prognosis compared with PAX7::FOXO1 and fusion-negative cases of ARMS . Recent publications report that the remaining fraction of fusion-negative ARMS has a clinical and biological behavior similar to ERMS. The fusion status of RMS with mixed patterns is heterogeneous among different publications, but the majority of reported cases are fusion-negative . It is believed that fusion status should be investigated for all cases of RMS, including RMS with a mixed pattern, since it carries prognostic value. Several studies have examined gene expression differences in fusion-driven RMS compared with its fusion-negative counterpart, as well as their relation to myogenin expression status, and reported that strong and diffuse expression of myogenin is closely associated with the presence of PAX3/7::FOXO1 translocations . Kaleta et al. concluded in 2019 that immunohistochemical expression of OLIG2 may function as a surrogate marker for the presence of a PAX3/7::FOXO1 translocation in RMS . The current case showed no OLIG2 immunohistochemical staining and heterogeneous expression of myogenin, possibly denoting fusion negativity. One of the shortcomings of this study is that genetic analysis was not performed; we therefore emphasize the importance of molecular testing for accurate categorization and better prediction of tumor behavior. Rhabdomyosarcoma arising in the uterus has been reported before. In 1909, Robertson described the first case of uterine rhabdomyosarcoma in the English literature, in which an alveolar architecture of the tumor was portrayed . Nevertheless, mixed rhabdomyosarcoma of the alveolar and embryonal types is very rare. To the best of our knowledge, besides the present case, only Gottwald et al., in 2008, reported such a case. They reported that their patient had a previous history of breast carcinoma and, interestingly, was diagnosed with both uterine RMS and a gastric GIST while receiving adjuvant hormonal therapy for breast cancer . The present case had no past medical history, yet pursued a very aggressive clinical course and died 3 months after surgery of complications of systemic metastasis, despite receiving postoperative chemotherapy and radiotherapy. Summing up, the above-described case of rhabdomyosarcoma with mixed alveolar and embryonal patterns of the adult uterus is a very rare malignant tumor. Its diagnosis is based on histopathological analysis and confirmed by immunohistochemical examination. Clinical symptoms are non-specific in these cases. The rarity of this histological entity and the protocol applied make the presented case worth shedding light on. Moreover, despite comprehensive treatment, it is an aggressive tumor with a poor prognosis, and thus further molecular studies and research are needed to improve therapy options in adults.
Ethical statement
Approval for a study protocol was not required because this was a case report with a literature review. The authors have obtained the patient's written informed consent for print and electronic publication of this case report.
A study establishing sensitivity and accuracy of smartphone photography in ophthalmologic community outreach programs: Review of a smart eye camera
Diagnostic instruments
The conventional non-portable slit-lamp microscope and the SEC (SEC-i07; OUI Inc., Tokyo, Japan) were both used as diagnostic instruments in this study. The SEC is a smartphone attachment medical device that fits above the light source and camera lens of the smartphone (Pharmaceuticals and Medical Devices Agency registered Japan medical device number: 13B2X10198030101). The SEC irradiates a blue light at a wavelength of 488 nm when an acrylic resin blue filter (PGZ 302K 302, Kuraray Co., Ltd., Japan) is placed above the light source of the smartphone. A convex macro lens (focal length = 40 mm, magnification = ×7) is placed above the camera to adjust the focus.
The frame is manufactured from polyamide 12 on a 3D printer (Multi Jet Fusion 3D Model 4210; Hewlett-Packard Company, Palo Alto, CA, USA). The iPhone 7 (Apple Inc., Cupertino, CA, USA) was used to make the recordings.
This was a pilot study: a prospective, non-randomized comparative analysis of inter-observer variation of the SEC for anterior segment imaging. We enrolled Indian adult men and women who visited the cornea specialty outpatient clinic at Aravind Eye Hospital, Madurai, Tamil Nadu, India, from 01/10/2021 to 31/12/2021. One hundred consecutive patients with various corneal pathologies were examined on a conventional non-portable slit lamp (Topcon SLD 701, serial number Z162494) by a cornea consultant, and the diagnoses were recorded. On the same day, anterior segment videos of these 100 cases were documented using the SEC by the same consultant. For the SEC examination, the SEC was placed 2–4 cm away from the cornea; this distance is important because the convex lens in front of the camera was designed to be in best focus at 2–4 cm. Each video included at least three blinks in order to record a good image of the ocular surface. The resolution of the video was 4K, with a frame rate of 30 frames per second. Finally, the recorded videos of the 100 cases were shown to two other cornea consultants (consultant 1/C1 and consultant 2/C2) separately on a computer, and they were asked to record their diagnoses based on the videos only. Patient information and clinical details were masked to avoid any bias prior to analysis.
Frequency (percentage) was used to describe summary information. Accuracy of the SEC was assessed using sensitivity, specificity, PPV, and NPV. Kappa statistics were used to assess the agreement between the two consultants. A P value < 0.05 was considered statistically significant. All statistical analyses were done with STATA 17.0 (Texas, USA).
Results were calculated by analysing all the diagnoses made with the slit lamp and the SEC video-based diagnoses made by consultant 1 and consultant 2. A total of 100 patients were enrolled for the study. All participants had various corneal pathologies, which were broadly classified as ocular surface disorders, corneal ulcers, corneal ectasia disorders, corneal dystrophies, corneal trauma, and corneal transplants for accurate analysis. Of these 100, 59 (59%) were male and 41 (41%) were female. Thirty-seven (37%) and 30 (30%) were diagnosed with OSD and corneal ulcer, respectively. The table shows the agreement between the two consultants when diagnosing with the SEC. Agreement above 90% was found for all diagnoses, which was statistically significant (P value < 0.001).
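The agreement and accuracy analysis described above was run in STATA 17.0. Purely as an illustration of how the same metrics are obtained from a 2×2 table, the sketch below computes sensitivity, specificity, PPV, NPV, and Cohen's kappa; the counts used in the example are hypothetical placeholders, not the study data.

```python
# Illustrative only: diagnostic accuracy and agreement metrics of the kind
# reported in this study. The counts below are hypothetical placeholders.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix given as a list of lists."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    p_observed = sum(confusion[i][i] for i in range(k)) / n
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    p_expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: SEC-video diagnosis of "corneal ulcer" versus the
# slit-lamp reference in 100 patients.
print(diagnostic_accuracy(tp=28, fp=3, fn=2, tn=67))

# Hypothetical agreement between consultant 1 and consultant 2 on the same call.
print(cohens_kappa([[29, 2],
                    [3, 66]]))
```

With these placeholder counts the functions return a sensitivity of about 0.93 and a kappa of about 0.88, which is the scale of agreement the study reports; the actual per-diagnosis tables are in the paper's table.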
Currently, many devices that can record anterior segment images by slit light are available. Some studies have reported the usefulness of slit-lamp microscope accessories that can attach to a smartphone camera. Chen et al. demonstrated good reproducibility of cataract grading using the iPhone camera. Dubbs et al. reported a good rust ring image on the cornea taken using a smartphone. These devices are attachable to conventional non-portable slit-lamp microscopes but not to smartphones. Mohammadpour et al. and Sanguansak et al. demonstrated the effectiveness of videos obtained with a smartphone combined with a macro lens. Finally, a three-dimensionally (3D) printed smartphone attachment was invented, which uses the smartphone's light source and camera function. All these gadgets using the light source of the smartphone could release only diffuse white light, which was capable of illuminating only the surface of the eye. However, ophthalmologists require a narrow slit light for detailed evaluation, such as the depth of a lesion, the size of a lesion, the thickness of the cornea, anterior chamber depth, contents of the anterior chamber, and grading of cataract. The SEC overcomes this problem by using a combination of a thin slit and a cylindrical lens, which converts the light source of the smartphone into a slit-light beam that can be focused on the desired object. It also provides a blue filter. This is the first study establishing the sensitivity and accuracy of smartphone photography in community outreach programs in ophthalmology, especially for corneal pathologies, in an Indian population. In this study, we found acceptable accuracy of the SEC in diagnosing corneal pathologies as compared with slit-lamp examination. We were able to diagnose various corneal pathologies, including pterygium, epithelial pathologies, keratitis, ulcers, corneal dystrophies, OSSN, and LSCD, using SEC videos. The videos of corneal staining taken with the blue filter of the SEC were especially helpful in making an accurate diagnosis. The SEC also has the following added advantages: it can be used for diagnosing a vast number of other ophthalmological diseases, particularly in the anterior segment of the eye (e.g., cataract, iris pathologies); it is small and portable and can be mounted easily on a smartphone camera, which reduces cost; and it is user friendly. The video recording does not require any special skills or learning curve, since almost everyone can handle smartphone cameras nowadays; hence, any health care worker can use this device. Moreover, the recorded videos can be shared immediately with other devices or live-streamed for expert opinion, and the data can be stored on an online cloud for future reference. We found the SEC by OUI Inc. to be a sensitive clinical tool for evaluating corneal pathologies, with more than 90% sensitivity and negative predictive value. The SEC can be used successfully in community outreach programs such as field visits, eye camps, teleophthalmology, and community centers, where either a clinical setup is lacking or ophthalmologists are not available. The collected data can be shared with the experts at the base hospital in real time for accurate diagnosis, based on which the further course of management for the patient can be decided. Also, patients requiring further investigations or surgical interventions can be referred to the base hospital without delay. This will help us reach the masses and deliver better patient care in remote areas.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Combining 3D printing technology with customized metal plates for the treatment of complex acetabular fractures: A retrospective study
Acetabular fractures are a severe type of fracture, typically classified into simple and complex categories. Complex acetabular fractures are further subdivided based on the Letournel-Judet classification system, including posterior wall + posterior column fractures, posterior wall + transverse fractures, T-shaped fractures, anterior column + posterior hemi-transverse fractures, and both-column fractures . These complex acetabular fractures typically involve various anatomical structures of the acetabulum, necessitating a more comprehensive and sophisticated treatment strategy. Treatment typically involves surgical intervention aimed at restoring the stability of the fracture and joint function. However, due to the complex anatomical structures and limited surgical approaches of acetabular fractures, traditional surgical methods may face challenges in dealing with complex acetabular fractures, potentially making it difficult to achieve precision and simplicity in surgery. Based on the findings of our previous research, 3D printing technology is highly beneficial for the treatment of posterior wall and posterior column fractures of the acetabulum . These fractures are classified as a simple type of acetabular fracture according to the Letournel-Judet classification, and the primary aim of that earlier work was to explore the therapeutic effects of 3D printing technology on this simple type of fracture. On the basis of those findings, our research group has already validated the effectiveness of 3D printing technology in the treatment of simple fractures. However, there are significant differences in diagnostic and treatment strategies between simple and complex acetabular fractures. Therefore, this study further extends the research in this area: by expanding case selection, patient age, and clinical context, it focuses on analyzing the application and effectiveness of 3D printing technology in the treatment of complex acetabular fractures classified under the Letournel-Judet system. Currently, 3D printing technology has made significant progress in orthopedic surgery, particularly in fracture treatment, joint replacement, spinal surgery, and orthopedic oncology . It has gradually become an indispensable technology in many complex surgeries. In the treatment of acetabular fractures, traditional surgical methods require the intraoperative adjustment of plate positioning and shaping based on the fracture reduction. This process is time-consuming and labor-intensive, and even after shaping, the plate may not fully conform to the anatomical structure of the acetabulum. This limitation can affect fracture reduction outcomes and the recovery of hip joint function . Currently, most studies utilize 3D printing technology to create pelvic models preoperatively, which can assist doctors in gaining a more intuitive understanding of the displacement of fractures and in simulating the reduction process, thereby enhancing the precision of surgery and treatment effectiveness .
During surgery, if the fit between a conventional anatomical plate and the fracture site is not ideal, further bending of the plate may be necessary, thereby adversely affecting stable fixation of the fracture . The application of 3D printing technology and the production of customized metal plates in fracture treatment may offer a more personalized, precise, and effective treatment approach, potentially providing better treatment outcomes for patients. Although 3D printing technology has been widely applied in the treatment of various fractures, research on its use in the treatment of complex acetabular fractures remains relatively scarce, particularly in conjunction with custom metal plates. To address this research gap, this study proposes an innovative treatment strategy: the use of 3D printing technology to create precise fracture reduction models, combined with custom metal plates, to overcome the limitations of traditional treatment methods in terms of fracture reduction accuracy and treatment outcomes. Based on computer virtual surgery technology, our research group created physical models of the fractures after reduction. Doctors determined the shape, size, and optimal placement of the metal plates using these models, which were then specially designed by engineers through the integration of medicine and engineering, resulting in customized, individualized metal plates. By comparing the effectiveness of 3D printing technology combined with customized metal plates to traditional surgical treatment for complex acetabular fractures, we aim to evaluate the advantages of their combined application in improving treatment outcomes and the quality of patient recovery. The main hypothesis of this study is that the application of 3D printing technology combined with custom metal plates in the treatment of complex acetabular fractures can improve fracture reduction accuracy, optimize the surgical procedure, promote the healing process, and effectively enhance postoperative functional recovery and quality of life for patients.
2.1. Participants
A retrospective analysis was conducted on patients with complex acetabular fractures admitted to the Central Hospital Affiliated to Shenyang Medical College from September 1, 2020 to May 31, 2022. Inclusion criteria were as follows: (1) patients aged ≥18 years; (2) complex acetabular fractures classified according to the Letournel-Judet classification; (3) fresh closed fractures requiring surgical intervention (fracture occurring <3 weeks earlier); and (4) availability of complete follow-up data. Exclusion criteria were: (1) open or pathological fractures; (2) old fractures (fracture occurring >3 weeks earlier); (3) Letournel-Judet classification as another type of fracture; (4) inability to mobilize the lower limbs for reasons other than the injury (e.g., neurological disorders); (5) lost or incomplete follow-up data; (6) poor health status or presence of other severe complications affecting postoperative rehabilitation exercises; and (7) simultaneous presence of another fracture (such as a femoral neck fracture). According to the different treatment regimens, 21 patients who opted for treatment with 3D printing combined with customized plates were categorized as the 3D printing group, while the remaining 21 patients who underwent traditional surgical treatment were categorized as the conventional group. Data including gender, age, BMI, mechanism of injury, and fracture classification were recorded for both groups.
2.2. Ethical approval
This study obtained approval from the Medical Ethics Committee of the Central Hospital Affiliated to Shenyang Medical College (Approval No: 2022012). All patients participating in the study were informed of and consented to the use of their personal information and clinical data for research analysis. Prior to the start of the study, all patients signed a written informed consent form. The researchers provided a detailed explanation of the study's purpose, procedures, potential risks, and the voluntary nature of participation, ensuring that patients fully understood and agreed to participate in the study.
2.3. 3D printing model
High-resolution CT scans were used to accurately obtain three-dimensional structural data of the surgical area. The CT data of the patient's fracture site were imported into the Mimics 20.0 software workstation (Materialise, Belgium) in DICOM format to reconstruct the three-dimensional model of the pelvic fracture . First, the bone region was initially segmented based on the gray value range of bone tissue. Then, manual segmentation tools were used to finely correct complex details, extracting complete bone structure data while removing the interference of surrounding soft tissues. Next, a smoothing tool was applied to eliminate model noise and optimize surface quality, ensuring the clarity and accuracy of the reconstructed model. During the 3D model analysis, the Mimics measurement tool was used to evaluate the fracture area in detail, including key parameters such as fracture displacement and angle changes. To better observe fracture characteristics, the femur and lumbar sections were removed, and the pelvic fracture model was examined through multi-angle rotation for a three-dimensional inspection. The region segmentation tool was then utilized to extract each bone fragment, applying color staining to present the fragments in different colors . Subsequently, virtual reduction of the fracture fragments was performed using the move and rotate functions . Next, the three-dimensional model of the pelvic fracture and the fracture data after virtual reduction were exported as stereolithography (STL) files, which were imported into FlashPrint 5 software (FlashForge, China). Polylactic acid (PLA) was used as the printing material; the standard printing temperature for PLA consumables is 215–220°C. The converted files were transmitted to the 3D printer (Dreamer, China). Finally, physical models of the fracture, both before and after virtual reduction, were printed at a 1:1 scale .
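The segmentation-to-print pipeline above was carried out interactively in Mimics 20.0 and FlashPrint 5, and the manual fragment correction and virtual reduction steps have no simple scripted equivalent. Purely as an illustration of the underlying threshold-segmentation-and-STL-export idea with open-source tools, a minimal sketch follows; the DICOM folder path, the 250 HU bone threshold, and the output file name are assumptions rather than study parameters.

```python
# Minimal open-source sketch of a CT -> bone isosurface -> STL step.
# Illustrative only; the study itself used Mimics and FlashPrint.
import SimpleITK as sitk
from skimage import measure
import trimesh

dicom_dir = "pelvis_ct_dicom/"          # hypothetical folder of DICOM slices

# Read the CT series into a single 3D volume (values in Hounsfield units).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)   # array axes are (z, y, x)

# Extract an isosurface at a bone-like threshold; passing the voxel spacing
# keeps the mesh in millimetres (GetSpacing() is (x, y, z), hence the reversal).
verts, faces, _normals, _values = measure.marching_cubes(
    volume, level=250, spacing=image.GetSpacing()[::-1]
)

# Export the surface as STL, ready for slicing software and 1:1 printing.
trimesh.Trimesh(vertices=verts, faces=faces).export("pelvis_fracture_model.stl")
```

In practice the exported surface would still need the interactive clean-up, fragment separation, and virtual reduction described above before it resembles the printed models used in this study.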
2.4. Designing customized metal plates
To achieve stable reduction of the fracture area, a detailed design of the placement, shape, and size of the customized metal plates was carried out using Unigraphics NX software (Siemens PLM Software, USA), based on the computer-aided virtual reduction of the pelvic fracture . Then, using polylactic acid (PLA) as the printing material, physical models of the metal plates were printed using FlashPrint 5 software. Subsequently, the physical models of the metal plates were imported into Mimics 20.0 software through reverse scanning. On the computer-aided virtual reduction model of the fracture, the placement of the customized metal plates was simulated once again to confirm their positioning. Next, the fracture model underwent transparency processing to confirm the implantation angle of the screws, ensuring that they did not penetrate into the joint cavity . The measurement function was utilized to confirm the length of the screws, and the recommended length of each screw was marked beside each screw hole on the metal plate . Finally, using pure titanium TA3 as the raw material, the metal plates were processed and shaped at the manufacturing plant to produce the actual customized metal plates . The metal plates then underwent treatments such as sandblasting, magnetic polishing, and ultrasonic cleaning before being sent to a dedicated quality inspection department for testing. After passing the tests, the metal plates were labeled and packaged. Prior to surgery, the metal plates were subjected to high-temperature, high-pressure sterilization.
2.5. Preoperative preparation
After admission, patients underwent routine examinations. Preoperatively, all patients underwent pelvic anteroposterior X-rays (Philips digital radiography DR system, Netherlands) and whole-pelvis CT scans with three-dimensional reconstruction (Philips 256-slice spiral CT scanner, Netherlands; scan thickness 0.6 mm). Three senior physicians classified the fractures based on the imaging data and selected patients with complex acetabular fractures according to the Letournel-Judet classification system. Detailed records of the patients' basic information, including age, gender, BMI, and time from injury to surgery, were kept. Preoperative hemoglobin levels were also recorded, and the numbers of patients with coagulation dysfunction, cardiac dysfunction, pulmonary dysfunction, and other organ dysfunctions were statistically analyzed. General anesthesia was administered to all patients, and on the day before surgery they underwent procedures such as skin preparation, enema, and urinary catheterization.
2.6. Surgical procedure
The patient underwent general anesthesia in an appropriate position. The surgical approach, including the ilioinguinal approach, modified Stoppa approach, para-rectus abdominis approach, and Kocher-Langenbeck approach, was chosen by the surgeon based on the fracture pattern and the type of metal plate used. All surgeries in this study were performed by the same team of experienced surgeons, ensuring that the skill level and experience of the surgeons remained consistent and minimizing the impact of operator variability on the surgical outcomes. In the 3D printing group, doctors used Mimics software to perform three-dimensional reconstruction of the patient's CT data, and a life-sized model of the patient's pelvis and fracture site was created by 3D printing for preoperative simulation. During the surgery, the surgeon first accurately repositioned the fracture fragments according to the 3D printed model and compared the alignment with the physical model to confirm the accuracy of the reduction. Subsequently, according to the preoperative personalized design plan, the custom metal plate, which had been pre-designed and processed, was fitted onto the repositioned fracture site and fixed in place with titanium alloy screws. The custom metal plates offer excellent biocompatibility and mechanical strength. In the conventional group, fracture reduction was performed based on the surgeon's experience, relying on visual observation and palpation during the surgery. After reducing the fracture, the surgeon selected a standardized anatomical plate, which was then bent or cut according to the reduced fracture's condition. The plate was subsequently fixed in place using standardized screws.
In both groups, screw lengths were measured intraoperatively, and fluoroscopy from multiple angles was used to ensure that the screws did not penetrate into the joint cavity. Furthermore, hip joint movement in various directions was tested to observe the stability of the fracture, ensuring the success of the surgery. Drainage tubes were placed, and the incisions were closed layer by layer. Surgical approach, operative time, number of intraoperative fluoroscopy scans, intraoperative blood loss, and other data were recorded for both groups.
2.7. Postoperative management
Because of the implanted metal plates, postoperative CT images exhibit significant metallic artifacts that compromise the accuracy of image interpretation; we therefore opted to use X-rays to assess the quality of fracture reduction. On the first postoperative day, both groups of patients underwent follow-up pelvic X-rays in anteroposterior, inlet, and outlet views. Drainage tubes were removed based on the drainage volume. Sutures were removed as appropriate at 2 weeks postoperatively. Ankle pump exercises were initiated 6 hours after surgery, followed by quadriceps contraction exercises 2–3 days after surgery, hip joint flexion and extension exercises at 2 weeks postoperatively, and partial weight-bearing exercises at 6 weeks postoperatively. During the follow-up period, pelvic X-ray examinations were performed monthly for the first two months postoperatively, followed by weekly examinations until the fracture healed completely. Fracture healing status was evaluated according to the imaging assessments during follow-up: X-rays revealing indistinct fracture lines with continuous callus formation traversing the fracture lines indicated fracture union. The follow-up period was not less than 12 months.
2.8. Assessment of therapeutic efficacy
General patient characteristics were compared between the two groups, including age, gender, BMI, mechanism of injury, fracture classification, preoperative preparation time, total hospitalization time, and average follow-up time, where preoperative preparation time is defined as the time from patient admission to the start of surgery. Surgical approaches were compared between the two groups, with a single surgical approach defined as completing the surgery using only one approach and a combined surgical approach defined as completing the surgery using two or more approaches. Surgical time, instrument operation time, intraoperative blood loss, intraoperative fluoroscopy times, and fracture healing time were also compared between the two groups. Surgical time is defined as the duration from incision to wound closure. Instrument operation time is defined as the time from adjusting the position of the metal plates to screwing all screws into the plates. Intraoperative blood loss was calculated by measuring the volume of irrigation fluid, blood in suction bottles, and blood on gauze. Intraoperative fluoroscopy times refer to the total number of fluoroscopic examinations during surgery. Fracture healing time was recorded from the first day after surgery until the patient met the diagnostic criteria for fracture healing. Radiological assessment was performed by three experienced orthopedic surgeons; the criteria for evaluating the quality of fracture reduction were as follows: displacement of <2 mm was considered good, while displacement of ≥2 mm was considered fair.
At 12 months postoperatively, hip joint function was assessed based on the Harris score , with the following criteria: hip joint function was considered excellent/good (Harris score ≥80) or fair/poor (Harris score <80). Complications during the follow-up period were recorded for both groups, including inflammatory reactions, heterotopic ossification, infection, iatrogenic neurological symptoms, and traumatic arthritis.
2.9. Statistical analysis
Statistical analysis was performed using SPSS 27.0 statistical software (IBM, Armonk, NY, USA). Categorical data were expressed as frequencies or percentages, and the chi-square test was used for group comparisons. Continuous data were expressed as mean ± standard deviation. Before analyzing continuous data, the Shapiro-Wilk test (used to test normality in small samples) was conducted to assess whether the data followed a normal distribution. For continuous variables that followed a normal distribution, the independent-samples t-test was used for group comparisons (suitable for comparing two independent groups where the data in each group are normally distributed and pass the homogeneity-of-variance test). For data that did not follow a normal distribution, the Mann-Whitney U test was used for non-parametric analysis (suitable for comparing distribution differences between two independent samples, especially when the data do not meet the normality or homogeneity-of-variance assumptions). A P value of less than 0.05 was considered statistically significant.
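The analysis itself was run in SPSS 27.0. The fragment below is only a compact sketch of the decision flow just described (normality check, then t-test or Mann-Whitney U; chi-square for categorical data), written with SciPy. The example arrays are hypothetical placeholders generated with the study's group size of 21, not patient-level data, and the homogeneity-of-variance check is illustrated with Levene's test as an implementation assumption.

```python
# Illustrative group-comparison workflow; not the authors' SPSS analysis.
import numpy as np
from scipy import stats

def compare_continuous(group_a, group_b, alpha=0.05):
    """Shapiro-Wilk on each group; if both look normal, Student's t-test with a
    Levene check for equal variances; otherwise the Mann-Whitney U test."""
    normal = (stats.shapiro(group_a).pvalue > alpha and
              stats.shapiro(group_b).pvalue > alpha)
    if normal:
        equal_var = stats.levene(group_a, group_b).pvalue > alpha
        result = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
        name = "t-test"
    else:
        result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        name = "Mann-Whitney U"
    return name, result.statistic, result.pvalue

def compare_categorical(table):
    """Chi-square test on a contingency table, e.g. [[males_A, females_A],
    [males_B, females_B]]."""
    chi2, p, _dof, _expected = stats.chi2_contingency(table)
    return chi2, p

# Hypothetical operative-time values (minutes) for 21 patients per group.
rng = np.random.default_rng(0)
printing_group = rng.normal(125, 13, 21)
conventional_group = rng.normal(174, 13, 21)
print(compare_continuous(printing_group, conventional_group))
print(compare_categorical([[13, 8], [12, 9]]))
```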
3.1. Baseline data
A total of 42 patients were included in this study, with 21 patients in each of the conventional group and the 3D printing group. There were no statistically significant differences between the two groups in terms of patient gender, age, BMI, mechanism of injury, or type of fracture (P > 0.05). The preoperative preparation time and total length of hospital stay were longer in the 3D printing group than in the conventional group, but the differences were not statistically significant (P > 0.05). The average follow-up time for both groups of patients was at least 12 months .
3.2. Preoperative examination results
The average hemoglobin level was (88.4 ± 14.4) g/L in the 3D printing group and (87.5 ± 15.2) g/L in the conventional group; the difference was not statistically significant (P = 0.84). In the comparison of coagulation function, cardiac function, pulmonary function, and abnormalities of other organ functions, there were no statistically significant differences between the two groups (P = 1.0, P = 0.70, P = 0.50, and P = 0.47, respectively) .
3.3. Surgical outcomes
The average time needed to produce the physical fracture models for the 3D printing group was 16.57 ± 3.16 hours, while the average time to produce the custom-made metal plates was 35.52 ± 5.11 hours. The proportion of surgeries completed through a single surgical approach was higher in the 3D printing group than in the conventional group, and the difference was statistically significant (P = 0.01). The average surgical time in the 3D printing group was 124.76 ± 12.89 minutes, less than the conventional group's 174.05 ± 12.51 minutes. The instrument operation time in the 3D printing group was 44.57 ± 5.32 minutes, less than the conventional group's 62.9 ± 7.47 minutes. The intraoperative blood loss in the 3D printing group was 337.38 ± 51.95 ml, less than the conventional group's 545.24 ± 74.39 ml. The number of fluoroscopy exposures during surgery was 8.25 ± 1.18 in the 3D printing group, less than the conventional group's 10.52 ± 1.6; all of these differences were statistically significant (P < 0.001) .
3.4. Postoperative follow-up data
The healing time of fractures in the 3D printing group (13.95 ± 1.07 weeks) was slightly longer than that in the conventional group (13.81 ± 1.17 weeks), but the difference was not statistically significant (P = 0.14).
According to the X-ray evaluation of fracture reduction quality on the first postoperative day and the Harris evaluation criteria at 12 months postoperatively, the fracture reduction quality and hip joint function in the 3D printing group were significantly better than those in the conventional group (good fracture reduction rate: 95.24% vs. 61.9%; excellent/good hip joint function rate: 90.48% vs. 57.14%), and both differences were statistically significant (P = 0.02; P = 0.01). During the postoperative follow-up, there were 2 cases of traumatic arthritis, 2 cases of infection, 1 case of heterotopic ossification, and 1 case of inflammatory reaction in the conventional group, totaling 6 cases. In the 3D printing group, 1 patient developed an inflammatory reaction and 1 patient developed heterotopic ossification 2 months postoperatively, totaling 2 cases. The number of postoperative complications in the 3D printing group was lower than that in the conventional group, but the difference was not statistically significant (P = 0.24) . The images of a patient in the 3D printing group are shown in Figs – .
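As an illustration of how the dichotomized outcomes above can be compared, the sketch below reconstructs the 2×2 tables from the reported percentages (n = 21 per group) and applies an exact test. This is a hedged example: the counts are back-calculated, the Harris scores shown are invented, and the exact P-values will depend on whether a chi-square test (with or without continuity correction) or Fisher's exact test is used, so the sketch is not expected to reproduce the reported values exactly.

```r
# Sketch of the outcome dichotomization and categorical comparisons reported above.
# Counts are reconstructed from the reported percentages (n = 21 per group).

# Harris score dichotomization (>= 80 = excellent/good); scores are hypothetical
harris       <- c(85, 92, 78, 81)
harris_grade <- ifelse(harris >= 80, "excellent/good", "fair/poor")

# Fracture reduction quality: good in 20/21 (3D printing) vs 13/21 (conventional)
reduction <- matrix(c(20, 1,
                      13, 8),
                    nrow = 2, byrow = TRUE,
                    dimnames = list(group   = c("3D printing", "Conventional"),
                                    quality = c("Good", "Fair")))
fisher.test(reduction)

# Postoperative complications: 2 vs 6 cases
complications <- matrix(c(2, 19,
                          6, 15), nrow = 2, byrow = TRUE)
fisher.test(complications)
```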
In recent years, computer simulation surgery techniques and 3D printing technology have been applied in many complex fracture surgeries, allowing surgeons to achieve more reliable fracture fixation tailored to the specific conditions of the patient. Ansari et al. compared two surgical approaches, traditional surgery and the use of pre-bent metal plates with 3D printing technology, in the treatment of complex acetabular fractures. The results showed that 3D printing facilitated a better understanding of the anatomical structure of acetabular fractures, leading to reductions in surgical time, intraoperative blood loss, and the number of intraoperative fluoroscopy exposures. Hsu et al. achieved improved outcomes in terms of surgical time and effectiveness for treating acetabular fractures by employing preoperative computer-assisted virtual reduction and intraoperative combined use of 3D-printed models and pre-bent metal plates. Compared to other studies that apply 3D printing technology for the treatment of acetabular fractures, the innovation of this research lies in the combination of 3D printing with personalized custom plates, providing a precise treatment plan for complex acetabular fractures. Many 3D printing studies primarily focus on constructing virtual models of fractures to assist with preoperative planning and design, but few studies have directly combined 3D printing technology with custom plates for clinical applications. However, for complex acetabular fractures, due to the intricate nature of the fracture site and variations between individuals, even with the use of pre-bent metal plates based on 3D-printed models, achieving an ideal fit between the plate and the acetabular anatomy may be challenging during surgery. Further bending of the plates and fixation may be required intraoperatively. Our research team utilized 3D printing technology in conjunction with customized plates for the treatment of complex acetabular fractures. We found no statistically significant differences in preoperative preparation time or total hospital stay between the 3D printing group and the conventional group (P > 0.05). This indicates that preoperative printing of 3D models and customization of plates did not prolong the corresponding times, which is consistent with the findings of Hung et al. .
The 3D printing group demonstrated significantly lower average surgical times compared to the traditional group (P < 0.001). This difference may be attributed to the personalized surgical planning available in the 3D printing group, along with advantages such as preoperative simulation of the surgical procedure, thereby reducing unnecessary steps during surgery. This also resulted in a significant reduction in intraoperative fluoroscopy exposures for the 3D printing group (P < 0.001). In addition, compared to the traditional group, the use of individualized custom plates for fracture fixation intraoperatively in the 3D printing group reduced the time spent on bending and shaping the plates, thereby significantly decreasing the instrument manipulation time (P < 0.001). It is worth noting that, due to the individualized customization of plates in the 3D printing group based on the type of fracture, some customized plates may have different shapes or larger volumes. These factors may to some extent increase the surgical time and instrument manipulation time. However, overall, the 3D printing group still demonstrated shorter average surgical and instrument operation times. Utilizing 3D printing technology, surgeons gain a deeper understanding of the specific patterns of complex acetabular fractures. Consequently, they can precisely select a single, less extensive, or limited invasive surgical exposure method. This approach maximally preserves surrounding tissues and enables effective fracture fixation. As a result, the 3D printing group exhibited a significantly higher proportion of single surgical approaches and significantly lower intraoperative blood loss compared to the traditional group (P = 0.01; P < 0.001). The aforementioned results are consistent with many studies utilizing pre-bent metal plates through 3D printing technology for acetabular fracture surgeries . Therefore, employing 3D printing technology in conjunction with customized plates for treating complex acetabular fractures can significantly and effectively reduce surgical complexity, shorten surgical duration, and decrease intraoperative blood loss. The 3D printing group also demonstrated superior fracture reduction quality on the first postoperative day and better hip joint functional scores at the 12-month follow-up compared to the conventional group (P = 0.02; P = 0.01). In the 3D printing group, the proportions of fractures with good reduction scores and hip joint functional scores reaching excellent or good levels were 95.24% and 90.48%, respectively, higher than the corresponding rates of 61.9% and 57.14% in the conventional group. These study results demonstrate that combining 3D printing models with customized plates not only effectively improves surgical efficiency and reduces surgical complexity but also significantly enhances the quality of fracture reduction and improves the postoperative recovery of hip joint function. Additionally, there were no significant differences between the two groups in terms of fracture healing time and overall complication rates (P = 0.14; P = 0.24). This result has significant implications for clinical practice: first, shortening the surgical time and reducing intraoperative blood loss are crucial for the patient’s surgical tolerance and postoperative recovery; second, the improvement in fracture reduction quality and the restoration of hip joint function have a significant positive impact on the patient’s long-term quality of life.
It should be noted that such techniques have not yet been widely applied in clinical practice. Therefore, although these preliminary research results confirm the potential advantages of personalized customized plates in the treatment of acetabular fractures, further research and clinical validation are needed to determine their effectiveness and feasibility in actual clinical practice. However, the combination of 3D printing technology with personalized customized plates also has some drawbacks. Firstly, both 3D models and customized plates incur higher costs, with specific prices depending on factors such as materials used and printing size . Secondly, 3D printing technology cannot accurately reflect the surrounding soft tissues, blood vessels, nerves, etc., of the fracture site . Thirdly, although 3D printing is a rapid prototyping technology, it still requires a considerable amount of time to complete. A limitation of this study is that complex acetabular fractures are relatively rare compared to other fractures, resulting in a small sample size. Therefore, larger studies are needed in the future to confirm the effectiveness of combining 3D printing technology with customized plates in the surgical treatment of complex acetabular fractures. With the continuous advancement of 3D printing technology, personalized treatment plans combining custom plates are expected to become the standard approach for treating complex acetabular fractures. Therefore, future research should not only focus on the application of this technology in complex acetabular fractures but also explore its potential in other types of complex fractures. Through these studies, the goal is to provide more precise and safer treatment options for clinical practice, promoting the development of personalized medicine. The limitations of this study are mainly reflected in the following aspects. First, as a retrospective design, the data may be affected by patient selection bias and inconsistent treatment standards, which could impact the accuracy of the results. Second, due to the relatively rare occurrence of complex acetabular fractures, the sample size in this study is small, limiting the statistical power of the results and the ability to generalize them to broader clinical practice. To further validate the effectiveness of 3D printing technology combined with custom plates in the surgical treatment of complex acetabular fractures, future research should expand the sample size, particularly through multicenter, large-scale prospective clinical trials to ensure the comprehensiveness and reliability of the data. The application of 3D printing technology combined with personalized customized metal plates in the treatment of complex acetabular fractures can shorten surgical and instrument operation time, reduce intraoperative blood loss, and improve the quality of fracture reduction and recovery of hip joint function. In future clinical practice, especially in the treatment of complex acetabular fractures, 3D printing technology could be considered for preoperative planning. It allows for the precise design of metal plates based on the patient’s individual anatomical structure, enabling personalized treatment and optimizing surgical outcomes. S1 Data (XLSX)
Machine learning-based plasma metabolomics for improved cirrhosis risk stratification | 75b7edf5-d51f-41a1-ba2d-13f5ae8c7378 | 11800577 | Biochemistry[mh] | Cirrhosis is the 11th most common cause of death worldwide, with more than 1 million deaths annually . The primary causes of cirrhosis are chronic liver diseases (CLD), including viral infections, alcoholic liver disease (ALD), metabolic-associated fatty liver disease (MAFLD, also known as non-alcoholic fatty liver disease [NAFLD]), autoimmune liver diseases, chronic cholestasis, and drug-related or toxic liver injuries. The progression of CLD follows the course from liver fibrosis to cirrhosis. While most patients with cirrhosis have a single underlying cause, a minority have multiple contributing factors . CLD often culminates in cirrhosis at the final stage. CLD progresses to irreversible cirrhosis through metabolic dysregulation, leading to fat accumulation, inflammation, and fibrosis, which gradually damages the structure and function . The progression of cirrhosis takes several years, and owing to the heterogeneity among patients with CLD, predicting the advancement of cirrhosis and its complications remains challenging . Current research primarily focuses on the diagnosis of cirrhosis, with few studies addressing the risk stratification for cirrhosis in patients with CLD. Previous studies have explored risk scores for predicting cirrhosis in such patients, but these have often been limited by the type of liver disease studied or by small sample sizes . Although liver biopsy remains the gold standard for diagnosing and staging liver fibrosis, its invasive nature and associated risks limit its suitability for the longitudinal monitoring of fibrosis progression or assessment of treatment response . Serum markers such as the aspartate aminotransferase-to-platelet ratio index (APRI) and fibrosis-4 (FIB-4) index are commonly used to predict cirrhosis, but their sensitivity and specificity remain suboptimal . Managing CLD is challenging and more reliable methods are needed to assess the risk of cirrhosis progression in these patients. Early identification of the risk of progression to cirrhosis in patients with CLD allows for increased monitoring frequency, implementation of preventive measures, and reduction in treatment burden. The liver is a central regulator of metabolism. Increases in free fatty acids, hyperglycemia, lipotoxicity, and significant alterations in protein synthesis following cell activation can disrupt the liver structure, promoting the development of liver fibrosis and, eventually, cirrhosis . Persistent metabolic dysfunction in the liver can lead to chronic mitochondrial impairment and CLD, ultimately progressing to end-stage liver disease . The rapidly advancing field of metabolomics has become a powerful tool in clinical research, facilitating the identification of biomarkers, phenotyping, disease staging, and uncovering the underlying mechanisms . Metabolites are known to be associated with the progression of CLD to cirrhosis . Proton nuclear magnetic resonance (1 H-NMR) spectroscopy-based metabolomics of serum samples is a quantitative method for studying multi-parameter metabolic changes and responses, and has been widely applied in liver disease research . However, the potential of serum metabolomics to predict the progression of CLD to cirrhosis has yet to be systematically evaluated and benchmarked. 
The UK Biobank (UKB) is a large prospective cohort study that recruited more than 500,000 participants between 2006 and 2010 . The UKB participants underwent comprehensive phenotypic characterization and their health records were subsequently recorded. Large-scale metabolomic profiling was conducted on approximately 120,000 baseline serum samples using 1H-NMR, covering 168 individual metabolites. In this study, we utilized the resources of the UKB and integrated metabolomic data with machine learning to enhance risk stratification for cirrhosis in CLD. Study design The participants underwent extensive baseline assessments, which included the collection of clinical information and biological samples, with regular updates and follow-up . Blood, urine, and saliva samples were collected for analysis at baseline. Metabolic biomarkers were obtained from 118,019 baseline EDTA non-fasting venous serum samples using the high-throughput nuclear magnetic resonance (NMR) metabolomics platform developed by Nightingale Health Ltd., between June 2019 and April 2020. Detailed information can be found in UKB research documentation ( https://biobank.ctsu.ox.ac.uk/ukb/ukb/docs/nmrm_companion_doc ). After controlling for quality and batch effects, 249 metabolic biomarkers were available (168 original measurements and 81 derived ratios). We selected 168 primary metabolites based on their direct concentrations and biological relevance, as these biomarkers are widely recognized for their utility in predicting disease risk and their strong association with clinical outcomes . The remaining 81 indicators, including metabolite ratios, percentages of individual metabolites within their total category, and measures of unsaturation, were excluded to maintain focus on the absolute concentrations of specific metabolites . We focused on 168 measurements representing the concentrations of various metabolites that were categorized into 17 groups. These included: Amino Acids ( n = 10), Apolipoproteins ( n = 2), Cholesterol ( n = 21), Cholesteryl Esters ( n = 18), Fatty Acids ( n = 9), Fluid Balance ( n = 2), Free Cholesterol ( n = 18), Glycolysis-Related Metabolites ( n = 4), Inflammation ( n = 1), Ketone Bodies ( n = 4), Lipoprotein Particle Concentrations ( n = 4), Lipoprotein Particle Sizes ( n = 3), Lipoprotein Particles ( n = 14), Other Lipids ( n = 4), Phospholipids ( n = 18), Total Lipids ( n = 18), and Triglycerides ( n = 18). In this study, we included complete data from all UKB participants who had 168 original serum metabolite measurements at their initial assessment center visit. We further excluded individuals with incomplete parameters, such as missing data on age or liver enzymes, those with a baseline diagnosis of cirrhosis, and those with significantly abnormal metabolomic measurements (defined as values exceeding 5 standard deviations from the mean). Individuals with missing data on key parameters, such as age and liver enzymes, were excluded to ensure the integrity and accuracy of the analysis. Only complete datasets were used in the final model to avoid potential biases and ensure robust, reliable results. To account for the influence of lipid-lowering medications on the metabolomic profiles, participants taking these medications were also excluded. This left 2,738 eligible patients with CLD for analysis, allowing us to explore the association between serum metabolomics and the progression of CLD. The cohort was then divided into derivation (80%) and validation (20%) cohorts.
This approach was chosen to preserve the heterogeneity within the sample, ensuring the model’s generalizability to a broader population. Given the relatively limited sample size, stratified sampling could have led to insufficient sample sizes in certain subgroups, potentially compromising the model’s stability. Furthermore, random sampling helps simulate a more diverse, real-world population while minimizing potential biases that could arise from an overly complex stratification process. We used the derivation subset to train the Elastic Net (EN) models to predict cirrhosis risk, which were subsequently validated in the validation subset (Fig. ). Definition of CLD and cirrhosis We defined the starting point as CLD and the endpoint as cirrhosis , both determined through clinical diagnoses obtained from electronic hospital health records or in cases of death or surgery related to the disease. Individuals diagnosed with liver fibrosis or cirrhosis at baseline were excluded from the study. Supplementary Table provides detailed disease codes and definitions. Detailed definitions of the diseases and predictors used in this study can be found in Supplementary Table . Cirrhosis risk models and predictor extraction We extracted cirrhosis-related predictors from the UKB dataset . Supplementary Table provides detailed descriptions of all relevant data fields, diseases, and associated information. Independent predictors of liver fibrosis severity include age, male sex, obesity, hypertension, diabetes, elevated alanine aminotransferase (ALT), elevated aspartate aminotransferase (AST), reduced platelet count (PLT), decreased albumin (Alb), and the presence of fatty liver and hepatic steatosis on ultrasound . Upon enrollment, we first tested the association between each metabolite and cirrhosis events and identified a series of significantly correlated metabolites. For risk prediction, we incorporated various factors into the risk assessments, including sociodemographic factors (age, sex), patient history (smoking, alcohol consumption, sleep patterns), physical measurements (body mass index [BMI], systolic blood pressure, and blood glucose), and clinical chemistry markers (liver enzymes, albumin, and additional relevant metrics). Potential confounders, including alcohol consumption, medication use, and dietary habits, were accounted for in the analysis framework. Alcohol consumption was adjusted using self-reported data on drinking frequency and intensity from the UK Biobank questionnaire. Medication use was addressed by excluding individuals taking lipid-lowering drugs with known effects on metabolic profiles. Although detailed dietary information was not available, the metabolites included in the analysis indirectly reflect nutritional status and dietary patterns. Systolic blood pressure was recorded twice, and the lower reading was used. If there was an error in the automated measurements, manual readings were recorded. We utilized a total of five models, with the base model being the “metabolomics” model, which used only the metabolomic measurements. To predict cirrhosis risk, we employed the FIB-4 and APRI models, both of which are widely used for assessing the risk of cirrhosis development in patients with CLD. The FIB-4 model incorporates four parameters: age, PLT, ALT, and AST levels. The APRI model is based on the AST level (relative to its upper limit of normal) and the platelet count.
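As a minimal sketch of the score construction and cohort preparation described above, the following R code computes FIB-4 and APRI from the standard published formulas, excludes extreme metabolomic values, and performs the random 80/20 split. The data frame `clds`, its column names, and the AST upper limit of normal (40 U/L) are illustrative assumptions rather than details of the actual UKB analysis.

```r
# Hypothetical data frame `clds`: one row per CLD patient with columns
# age, AST, ALT, PLT (10^9/L) and metabolite columns prefixed "met_".

fib4 <- function(age, ast, alt, plt) (age * ast) / (plt * sqrt(alt))   # standard FIB-4
apri <- function(ast, plt, ast_uln = 40) (ast / ast_uln) / plt * 100   # standard APRI; ULN assumed

clds$FIB4 <- fib4(clds$age, clds$AST, clds$ALT, clds$PLT)
clds$APRI <- apri(clds$AST, clds$PLT)

# Exclude extreme metabolomic values (> 5 standard deviations from the mean)
met_cols <- grep("^met_", names(clds), value = TRUE)
z        <- scale(clds[, met_cols])
clds     <- clds[apply(abs(z) <= 5, 1, all), ]

# Random 80/20 split into derivation and validation cohorts
set.seed(2024)
idx        <- sample(seq_len(nrow(clds)), size = floor(0.8 * nrow(clds)))
derivation <- clds[idx, ]
validation <- clds[-idx, ]
```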
We further developed APRI + metabolomics and FIB-4 + metabolomics models by integrating metabolomics data with the commonly used APRI and FIB-4 models to assess whether adding metabolomics data could improve model performance. Survival analysis of individual metabolite associations All metabolites were log-normalized and standardized to a mean of 0 and standard deviation of 1 to reduce skewness, ensure comparability across metabolites, and eliminate biases introduced by differences in their original scales, thereby enhancing the robustness and interpretability of the model. The Cox-PH model was used to assess the risk of cirrhosis associated with each metabolite. The model was adjusted for age, sex, and the complete set of CLD risk factors. The Benjamini-Hochberg method was applied to correct for multiple comparisons of P -values. Elastic net model development and evaluation To identify the most predictive metabolites, we applied a Cox proportional hazards (Cox-PH) model with EN regularization to the training set. EN regularization combines the benefits of both L1 and L2 penalties, balancing sparsity and model complexity by adjusting the α parameter (0 ≤ α ≤ 1), which controls the weight ratio between L1 (lasso) and L2 (ridge) regularization. The L1 component promotes variable selection, while the L2 component enhances model stability. We optimized the model’s performance using 10-fold cross-validation to determine the optimal α and λ (regularization strength) parameters, maximizing the Cox model’s discriminatory power (assessed by Harrell’s C-index) and minimizing overfitting. The predictive accuracy of the final model was evaluated in the validation set using metrics such as Harrell’s C-index, sensitivity, specificity, Youden’s index, and net reclassification improvement (NRI). Additionally, we performed receiver operating characteristic (ROC) curve and decision curve analyses, stratifying individuals into quintiles based on the predicted cirrhosis risk. The performance of the model was further validated through calibration and network visualization. We calculated and visualized Spearman’s correlations of the model features using the corrplot package. Network visualization followed the standard weighted gene co-expression network analysis (WGCNA) workflow, where metabolite correlations were calculated and converted into an adjacency matrix with a soft threshold of β = 30. The resulting topological overlap matrix was processed using a hard threshold (threshold = 0.2) to construct an unweighted network graph. Metabolite nodes included in the final APRI + metabolomics model were highlighted with saturated colors, and node sizes were adjusted according to the degree of connectivity. The entire network was generated and visualized using the WGCNA and igraph packages. Identification of key metabolites and pathway enrichment analysis Key metabolites were identified from the model using EN Cox regression. Metabolites with non-zero coefficients in the final model were considered critical, and their importance was ranked based on the magnitude of their coefficients. To further elucidate the biological pathways these metabolites are involved in, we performed pathway enrichment analysis using the WebGestalt tool ( https://www.webgestalt.org ).
The key metabolites were mapped to their corresponding pathways using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. Significance, software, and data availability All analyses were performed using R (v.4.3.3) software. Statistical significance was controlled using the Benjamini-Hochberg method, with significance defined as an adjusted P -value of less than 0.05. These data are available on the UKB website.
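The modelling steps described in this section can be sketched as follows, using the survival and glmnet packages: per-metabolite Cox-PH models with Benjamini-Hochberg correction, followed by an elastic-net-regularized Cox model tuned by 10-fold cross-validation on Harrell's C-index. The object names, column names (time, cirrhosis, met_*), and the fixed α value are assumptions for illustration; in the actual analysis both α and λ were tuned, and the exact implementation may differ.

```r
# Sketch of the per-metabolite screen and the elastic-net Cox model, using
# the hypothetical `derivation` cohort from the previous sketch.

library(survival)
library(glmnet)

# 1. Per-metabolite Cox-PH models with Benjamini-Hochberg correction
met_cols <- grep("^met_", names(derivation), value = TRUE)
pvals <- sapply(met_cols, function(m) {
  f   <- as.formula(paste("Surv(time, cirrhosis) ~", m, "+ age + sex"))
  fit <- coxph(f, data = derivation)
  summary(fit)$coefficients[m, "Pr(>|z|)"]
})
padj <- p.adjust(pvals, method = "BH")

# 2. Elastic-net-regularized Cox model, tuned by 10-fold cross-validation
#    with Harrell's C-index (type.measure = "C", supported for Cox models
#    in recent glmnet versions); assumes strictly positive concentrations
x <- as.matrix(scale(log(derivation[, met_cols])))
y <- Surv(derivation$time, derivation$cirrhosis)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 0.5,   # alpha would also be tuned
                   nfolds = 10, type.measure = "C")

cf       <- as.matrix(coef(cvfit, s = "lambda.min"))
selected <- rownames(cf)[cf[, 1] != 0]                  # metabolites retained by the model
```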
Baseline characteristics Following strict selection criteria, the final study cohort included 2,738 patients with CLD (Table ). The median age of the eligible participants was 56 years (interquartile range: 50–62 years), and 51.4% were male. During the study period, individuals who developed cirrhosis ( n = 142 [5.2%]) tended to be older. Significant differences were observed in baseline characteristics and event occurrence rates, particularly in various sociodemographic factors, such as the Townsend deprivation index (TDI) and alcohol consumption frequency. Participants with higher TDI scores were more likely to progress to cirrhosis. Lower socioeconomic status and greater social disadvantages were associated with cirrhosis ( P = 0.002). Frequency of alcohol consumption was also associated with cirrhosis, with more frequent drinking being linked to a higher incidence of cirrhosis ( P < 0.001). Clinical chemistry indicators, such as ALT, AST, alkaline phosphatase (ALP), PLT, Alb, total bilirubin (TBil), and direct bilirubin (DBil), as well as commonly used scores, such as APRI and FIB-4, were significantly associated with cirrhosis. These findings are consistent with expectations based on the literature . Relationship between individual metabolites and cirrhosis To examine the relationship between individual metabolites and the risk of cirrhosis, we used the Cox proportional hazards model to evaluate the association between these metabolites and the progression of CLD to cirrhosis. After adjusting for age and sex, 68 of 168 metabolites (40.5%) were found to be significantly associated with cirrhosis events. However, after further adjustment for all characteristics, the number of significantly associated metabolites decreased to 21 (12.5%). Notably, only 10 metabolites (5.9%) showed consistent significance in both models (detailed statistical results for all metabolites are provided in Supplementary Tables and ). In the age- and sex-adjusted models (Figs. A and A), we observed that most lipoprotein particles, triglycerides (very low-density lipoprotein [VLDL]), phospholipids (VLDL, low-density lipoprotein [LDL], and high-density lipoprotein [HDL]), and total lipids (VLDL and HDL particles) were positively associated with cirrhosis events, whereas other metabolites, such as certain amino acids and free cholesterol, showed a negative association. After further adjustment for age, sex, lifestyle factors, and biochemical measurements (Figs. B and B), the strength of the associations between phospholipids, amino acids, free cholesterol, and total lipids decreased. However, the association with lipoprotein particles became stronger, indicating that these metabolites remained significantly correlated with cirrhosis, even after adjusting for factors such as BMI, liver function tests, smoking, and alcohol consumption. In both Cox proportional hazards model analyses, 10 overlapping metabolites were identified. These metabolites are primarily distributed across the following categories: lipoprotein particle size, lipoprotein particle concentration, cholesteryl esters, phospholipids, and total lipids. Notably, the hazard ratios (HRs) for these metabolites showed consistent associations in both models, indicating that they may be significantly linked to the development of cirrhosis. These findings suggest that certain metabolites may serve as potential biomarkers for cirrhosis risk in patients with CLD. 
Supplementary Tables and provide detailed results from individual metabolite analyses, highlighting the complexity and specificity of these associations. Application of serum metabolomics in cirrhosis risk stratification Our dataset was divided into derivation (80%) and validation (20%) cohorts, with well-balanced baseline characteristics and cirrhosis outcomes between the two groups (Supplementary Table ). In the derivation cohort, we fitted elastic net regularized risk models. The five models included a metabolomics-only model (36 out of 168 metabolites), FIB-4 model, APRI model, FIB-4 + metabolomics model (18 out of 168 metabolites), and APRI + metabolomics model (22 out of 168 metabolites) (Supplementary Tables and for feature coefficients and optimized model hyperparameters). Figure A shows the ROC curves of the five models. The metabolomics-only, FIB-4, and APRI models had AUCs of 0.712, 0.696, and 0.718, respectively. The FIB-4 + metabolomics model, which combines FIB-4 and metabolomics data, achieved an AUC of 0.717, demonstrating a better discriminative ability than using FIB-4 or metabolomics alone. The APRI + metabolomics model had the highest AUC of 0.747, making it the most effective in terms of discriminative power among all models. Figure B shows the net clinical benefit of the models at different threshold probabilities. Net benefit measures the potential clinical value of the model across various thresholds. Models that incorporated metabolomics data outperformed those that used only traditional parameters across most thresholds, showing a higher net benefit. The APRI + metabolomics model demonstrated the highest net benefit at nearly all thresholds, indicating its potential for clinical application. Table provides the internal model validation statistics to assess absolute discriminative ability. The APRI + metabolomics model had the highest C-index (0.747), indicating its strongest ability to predict cirrhosis in patients. The delta C-statistic (ΔC) represents the difference in the C-index between models and is used to evaluate the performance improvement after incorporating metabolomics data. Based on our results, to further assess the degree of classification improvement, we compared the models in pairs (Table ), including FIB-4 versus APRI, metabolomics versus FIB-4 + metabolomics, metabolomics versus APRI + metabolomics, FIB-4 versus FIB-4 + metabolomics, APRI versus APRI + metabolomics, and FIB-4 + metabolomics versus APRI + metabolomics. Subsequent performance evaluations indicated that incorporating metabolomic data improved the discriminative ability of the models. For instance, the FIB-4 + metabolomics model showed an increase in the C-index of 0.021 [0.014–0.028] compared to FIB-4 alone, with an NRI of 0.504 [0.488–0.520] ( P < 0.001). Similarly, the APRI + metabolomics model showed an improvement of 0.029 [0.022–0.035] in the C-index compared to APRI alone, with an NRI of 0.378 [0.366–0.389] ( P < 0.001). These findings suggest that the addition of metabolomics data enhances classification. However, the NRI of the APRI + metabolomics model indicates a significant improvement in case classification but a negative NRI for non-cases, meaning that some low-risk individuals may be misclassified as high-risk. The results of internal validation metrics are summarized in Supplementary Table , with relative performance metrics detailed in Supplementary Table .
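For readers wishing to reproduce this style of comparison on their own data, the sketch below shows one way to obtain a Harrell's C-index and an ROC AUC for competing risk scores in a hold-out cohort. The `validation` data frame, the column `lp_apri_met` (an assumed linear predictor for the APRI + metabolomics model), and the treatment of cirrhosis status as a simple binary outcome are illustrative assumptions; time-dependent ROC methods would additionally account for censoring.

```r
# Sketch of the discrimination metrics reported above, on hypothetical data.

library(survival)
library(pROC)

# Harrell's C-index for a risk score (reverse = TRUE because a higher score
# should predict a shorter cirrhosis-free time)
cindex <- function(score, data) {
  concordance(Surv(time, cirrhosis) ~ score, data = data, reverse = TRUE)$concordance
}
c_apri     <- cindex(validation$APRI,        validation)
c_apri_met <- cindex(validation$lp_apri_met, validation)
delta_c    <- c_apri_met - c_apri        # ΔC between the two models

# ROC curve / AUC treating observed cirrhosis status as a binary outcome
roc_apri_met <- roc(validation$cirrhosis, validation$lp_apri_met, quiet = TRUE)
auc(roc_apri_met)
```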
Overall, the APRI + metabolomics model demonstrated superior performance in terms of both discriminative ability (C-index) and clinical net benefit, making it more effective for risk classification and the prediction of cirrhosis progression. Metabolomics and cirrhosis risk stratification To further explore the translational potential of the risk models, we assessed the cumulative cirrhosis incidence, model calibration, and cirrhosis-free survival based on the predicted risk. The inclusion of serum metabolomics in the FIB-4 and APRI models improved risk stratification across the cirrhosis incidence quintiles (Fig. A). Model calibration was evaluated for all individuals who completed the 10-year follow-up, and we found that all models were fairly and similarly calibrated, except for the metabolomics-only model, which was less accurate (Fig. B). Finally, Kaplan-Meier curves highlighted superior survival stratification over the entire follow-up period in models that included metabolomics, particularly FIB-4 + and APRI + metabolomics (Fig. C), with P < 0.05. Model selection and feature retention Reducing the number of measured features is crucial for a cost-effective clinical implementation and preventive screening. In our study, we used an EN-regularized Cox regression model to select the features that carried the most predictive information. Figure A illustrates this approach, in which the network visualization of metabolites highlights the effective representation of highly connected clusters by individual metabolites. The final model retained several unrelated metabolite features (Fig. B). A total of 17 metabolites were retained, including 6 amino acids, 2 fatty acids, and 9 lipoprotein subclasses (for all clinical and metabolite feature coefficients, see Supplementary Table ). Network visualization of the measured metabolites demonstrated that individual metabolites effectively represented highly correlated clusters. Key metabolites identified and pathway enrichment analysis The APRI + Metabolomics model identified 21 key metabolites across several functional categories as significant predictors of cirrhosis risk. Amino acids, including branched-chain amino acids (valine, leucine, isoleucine), glutamine, and glycine, were negatively associated with cirrhosis, suggesting their roles in maintaining protein synthesis and nitrogen balance. In contrast, phenylalanine and tyrosine were positively associated, reflecting hepatic dysfunction and metabolic stress. Fatty acids, such as docosahexaenoic acid (DHA) and monounsaturated fatty acids (MUFAs), indicated profound disturbances in lipid metabolism and inflammation, which are key drivers of cirrhosis progression. Cholesterol and lipoprotein markers, including LDL cholesterol, VLDL cholesterol, and total cholesterol minus HDL-C, revealed lipid overload and hepatocyte injury, while alterations in phospholipids and total lipids in large VLDL and very large HDL pointed to membrane instability and metabolic dysregulation. Free cholesterol in small VLDL further implicated oxidative stress as a contributing factor. Among these metabolites, DHA, MUFAs, and total lipids in large VLDL emerged as the most influential, highlighting the critical role of lipid metabolism disturbances in cirrhosis pathogenesis. The metabolic pathway enrichment analysis (Fig. ) identified multiple pathways disrupted during cirrhosis progression, underscoring their biological relevance in the disease. 
The most enriched pathway was phenylalanine and tyrosine metabolism, highlighting its central role in nitrogen imbalance and impaired hepatic clearance, both hallmark features of advanced liver disease. Other significantly enriched pathways included porphyrin metabolism, linked to disruptions in heme biosynthesis and oxidative stress, and nitrogen metabolism, reflecting urea cycle dysfunction and impaired ammonia detoxification. The degradation of valine, leucine, and isoleucine, representing branched-chain amino acid catabolism, was also notably enriched, suggesting impaired protein metabolism and reduced energy production. Additional enrichments were observed in pathways such as retinol metabolism, phosphatidylinositol phosphate metabolism, and amino sugar metabolism, indicating broader alterations in lipid signaling, vitamin metabolism, and carbohydrate utilization. Pathways like arginine and proline metabolism, along with purine metabolism, further emphasized the extensive metabolic reprogramming characteristic of cirrhosis.
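The risk-quintile stratification, Kaplan-Meier analysis, and log-rank comparison reported in the risk-stratification results above can be sketched as follows, again using the hypothetical validation cohort and risk score from the earlier sketches; the column names and plotting details are illustrative assumptions rather than the original analysis code.

```r
# Sketch of risk-quintile stratification and Kaplan-Meier analysis on the
# hypothetical validation cohort.

library(survival)

validation$risk_q <- cut(validation$lp_apri_met,
                         breaks = quantile(validation$lp_apri_met,
                                           probs = seq(0, 1, 0.2), na.rm = TRUE),
                         include.lowest = TRUE,
                         labels = paste0("Q", 1:5))

km <- survfit(Surv(time, cirrhosis) ~ risk_q, data = validation)
plot(km, col = 1:5, xlab = "Follow-up time (years)",
     ylab = "Cirrhosis-free survival")
legend("bottomleft", legend = levels(validation$risk_q), col = 1:5, lty = 1)

survdiff(Surv(time, cirrhosis) ~ risk_q, data = validation)   # log-rank test across quintiles
```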
Cholesterol and lipoprotein markers, including LDL cholesterol, VLDL cholesterol, and total cholesterol minus HDL-C, revealed lipid overload and hepatocyte injury, while alterations in phospholipids and total lipids in large VLDL and very large HDL pointed to membrane instability and metabolic dysregulation. Free cholesterol in small VLDL further implicated oxidative stress as a contributing factor. Among these metabolites, DHA, MUFAs, and total lipids in large VLDL emerged as the most influential, highlighting the critical role of lipid metabolism disturbances in cirrhosis pathogenesis. The metabolic pathway enrichment analysis (Fig. ) identified multiple pathways disrupted during cirrhosis progression, underscoring their biological relevance in the disease. The most enriched pathway was phenylalanine and tyrosine metabolism, highlighting its central role in nitrogen imbalance and impaired hepatic clearance, both hallmark features of advanced liver disease. Other significantly enriched pathways included porphyrin metabolism, linked to disruptions in heme biosynthesis and oxidative stress, and nitrogen metabolism, reflecting urea cycle dysfunction and impaired ammonia detoxification. The degradation of valine, leucine, and isoleucine, representing branched-chain amino acid catabolism, was also notably enriched, suggesting impaired protein metabolism and reduced energy production. Additional enrichments were observed in pathways such as retinol metabolism, phosphatidylinositol phosphate metabolism, and amino sugar metabolism, indicating broader alterations in lipid signaling, vitamin metabolism, and carbohydrate utilization. Pathways such as arginine and proline metabolism, along with purine metabolism, further emphasized the extensive metabolic reprogramming characteristic of cirrhosis. Cirrhosis is a major cause of morbidity and mortality in patients with CLD worldwide, and the number of cirrhosis-related deaths is expected to increase over the next decade . Therefore, greater efforts and resources are needed to promote primary prevention, early detection of cirrhosis, improved access to treatment, and better integration with healthcare services in order to reduce the global burden of cirrhosis . Identifying individuals at high risk of developing cirrhosis is essential for lowering liver disease-related mortality. The etiology of cirrhosis is changing owing to the rising prevalence of obesity, increased alcohol consumption, and improved management of hepatitis B and C infections . These factors shift the epidemiology and burden of cirrhosis. Previous models often focused on a single disease etiology and were not well equipped to address the evolving spectrum of CLD. When multiple liver diseases co-occur, these models tend to perform poorly. Therefore, there is an urgent need for a tool that is applicable to all types of CLD and offers higher diagnostic accuracy to identify individuals at risk of progressive disease. Early intervention in high-risk populations before clinical symptoms emerge could potentially reverse early-stage liver fibrosis. In this study, we described the association between individual metabolites and cirrhosis events, which is consistent with previously reported findings and established pathological mechanisms.
The accumulation of metabolic dysfunctions involving amino acids, lipids, carbohydrates, and fatty acids leads to oxidative stress and damage to hepatocytes, driving CLD to more severe pathological stages . Because the liver is the primary organ for amino acid metabolism, impaired liver function leads to altered amino acid metabolism. Previous studies have shown that early cirrhosis causes an imbalance in peripheral blood amino acids, characterized by a decrease in branched-chain amino acids (BCAAs) and an increase in aromatic amino acids (AAAs) , such as phenylalanine and tyrosine, due to impaired hepatic clearance. The enrichment of phenylalanine and tyrosine metabolism pathways highlights the accumulation of AAAs, which has been associated with nitrogen imbalance and complications such as hepatic encephalopathy . BCAA deficiency can occur as early as the chronic hepatitis stage prior to cirrhosis development, contributing to reduced albumin synthesis and diminished antioxidant capacity, further exacerbating disease progression in cirrhosis . BCAAs play an important role in improving immune function and reducing oxidative stress , suggesting that targeted BCAA supplementation could help address these metabolic derangements. At this stage, the ability of the liver to repair tissues is overwhelmed, leading to the progression of fibrosis . Adjusting amino acid metabolism, such as supplementing BCAAs or targeting AAA imbalances, could help mitigate oxidative stress, improve immune function, and slow the progression of liver fibrosis. Metabolic abnormalities reduce antioxidant capacity and increase lipotoxicity, thereby increasing the risk of cirrhosis . Hepatic lipotoxicity, characterized by the ectopic accumulation of triglycerides and their intermediates, leads to hepatocyte injury and structural changes within the liver . Lipid overload triggers apoptotic cascades with subsequent caspase activation, potentially promoting inflammation and fibrosis . Cholesterol crystals can activate the NLRP3 inflammasome, leading to hepatocyte inflammation. At the tissue level, repair and remodeling processes occur, resulting in fibrosis. Over time, sustained lipid overload induces oxidative stress, ultimately leading to fibrosis formation . Hepatocytes are particularly vulnerable to the “multiple hits” characterized by oxidative stress, which further promotes the synthesis of pro-inflammatory cytokines, driving disease progression . The enrichment of pathways such as retinol metabolism , phosphatidylinositol phosphate metabolism , and porphyrin metabolism underscores the systemic nature of lipid metabolic dysfunction in cirrhosis. Elevated DHA levels, although typically anti-inflammatory, may reflect maladaptive responses to chronic inflammation in advanced liver disease. Similarly, reductions in MUFAs and alterations in total lipids in large VLDL particles highlight lipid imbalances that exacerbate hepatocyte injury . Lipid peroxides and products from damaged hepatocytes further activate hepatic stellate cells, driving their transition into myofibroblast-like cells and promoting fibrotic remodeling . These findings highlight the critical role of lipid metabolism in cirrhosis progression and suggest potential therapeutic targets. Anti-inflammatory interventions and strategies to restore lipid balance could mitigate hepatocyte injury and fibrosis, offering promising avenues for cirrhosis management.
In the context of our study, we accounted for several important factors to mitigate potential biases and improve the reliability of our results. First, we addressed variable selection bias by employing elastic net regularization combined with 10-fold cross-validation, ensuring that the metabolites selected for the models were robust predictors of cirrhosis risk. Regarding the healthy volunteer bias inherent in the UK Biobank data, which may overrepresent healthier and older individuals, we acknowledge that this could limit the generalizability of our findings to younger or more diverse populations. However, the results still provide valuable insights, and future studies should aim to validate the models in broader cohorts with varying disease severity and comorbidities to better understand their applicability in real-world clinical settings. To address data imbalance, we used random sampling to divide the dataset into a derivation cohort (80%) and a validation cohort (20%) without stratified sampling. This approach aimed to preserve heterogeneity within the sample, improving the model’s generalizability. Additionally, batch effects were minimized through rigorous data processing and standardization to ensure comparability of metabolite concentrations across different batches. For multiple comparison correction, we applied the Benjamini-Hochberg method to adjust p-values, reducing the likelihood of false positives. These strategies enhanced the robustness of our model while providing a clear framework for future improvements in clinical validation. We developed five models, and the results showed that the APRI + metabolomics model performed best in cirrhosis risk stratification, demonstrating a strong discriminative ability. Serum metabolomic analysis based on 1H-NMR is not only reliable but also cost-effective, providing comprehensive systemic metabolic information from a single blood sample. The strength of this study lies in its large sample size and the exclusion of patients with pre-existing cirrhosis, thereby minimizing bias related to treatments, such as lipid-lowering therapies. While this study demonstrates the utility of 1H-NMR in identifying predictive metabolites for cirrhosis, it is important to acknowledge the platform’s limitations. Compared to mass spectrometry (MS), 1H-NMR has lower sensitivity and may not capture low-abundance metabolites or complex chemical structures . At the same time, the platform’s non-destructive nature, low sample preparation requirements, and reproducibility make it well-suited for large-scale studies . Furthermore, combining 1H-NMR data with MS in future analyses could provide a broader metabolic profile, improving the accuracy and generalizability of predictive models. In addition, we acknowledge that excluding individuals using lipid-lowering medications may have introduced potential biases. Lipid-lowering medications can significantly impact lipid metabolism and, consequently, the metabolomic profiles. While excluding this group minimizes confounding effects, it may limit the generalizability of the findings to broader populations where such medications are commonly used. Future studies could include these individuals and adjust for the effects of lipid-lowering therapies to better assess their impact on cirrhosis risk prediction. This is the first study to provide a detailed description of cirrhosis risk stratification using metabolomics.
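As a side note on the multiple-testing step mentioned above, the Benjamini-Hochberg adjustment is available off the shelf; the snippet below is a generic illustration with made-up p-values rather than the study's actual analysis.

```python
# Generic illustration of Benjamini-Hochberg FDR correction for per-metabolite p-values;
# the p-values here are invented for demonstration.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.0004, 0.012, 0.030, 0.048, 0.21, 0.56])   # hypothetical raw p-values
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p, q, keep in zip(pvals, pvals_adj, reject):
    print(f"raw p = {p:.4f}  adjusted p = {q:.4f}  significant: {keep}")
```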
We validated clinical risk scores based on disease specificity using a well-established metabolomics platform that has received regulatory approval. However, several challenges remain in their clinical application. First, it is well known that the UK population does not fully represent the global population, as participants in this study tend to be older and healthier than the general population (the “healthy volunteer” bias). Another limitation of this study was the absence of liver elastography data. While we validated the internal effectiveness of the models, external validation in wider and more heterogeneous populations is necessary to assess their robustness and generalizability. Real-world clinical settings could benefit from integrating data across different ethnicities, age groups, comorbidities, and disease etiologies, providing valuable insights into the model’s utility. Additionally, integrating metabolomics data from platforms like mass spectrometry could offer a more comprehensive metabolic profile, likely improving the model’s performance in diverse populations. Moreover, the blood samples were collected in a non-fasting state and stored for extended periods before analysis, which may have introduced some variation, potentially underestimating the performance of the metabolomics models. The fasting state can be a significant confounding factor in metabolomics research, as it can alter levels of certain metabolites, particularly those related to short-term energy metabolism, such as glucose and ketone bodies . However, previous studies have shown that many key metabolites, including amino acids, certain lipids, and free fatty acids, are minimally affected by fasting. This supports the robustness of our findings, as the primary metabolites identified in our study (free fatty acids and amino acids) are less sensitive to fluctuations due to fasting status. Future studies should aim to minimize variability by standardizing sampling conditions, including documenting fasting duration during sample collection. Comparing metabolite levels between fasting and non-fasting states would help assess how this variable influences model performance. Furthermore, validating models in independent cohorts with strict fasting controls could ensure the robustness and generalizability of the findings. Addressing these considerations will enable metabolomics studies to better capture disease-specific metabolic features and improve the reliability of predictive models. Potential confounders such as alcohol consumption, medication use, and dietary habits may have also influenced the metabolomic profiles and progression of cirrhosis. While self-reported data on alcohol consumption and medication use were adjusted for in the analysis, detailed dietary information was unavailable. This limitation highlights the need for future studies to collect comprehensive lifestyle and dietary data and conduct sensitivity analyses to evaluate their impact on predictive models. Finally, the metabolomics platform used in this study covered a wide range of lipids and lipid subtypes, offering rich data for further exploration of the relationship between metabolites and cirrhosis risk. In conclusion, we have demonstrated that 1H-NMR serum metabolomics can effectively serve as a standalone screening tool for assessing the risk of cirrhosis.
We also showed that machine learning can be successfully applied to reduce the number of features used for risk prediction while maintaining strong predictive performance. In the context of cirrhosis, several large metabolite clusters can be efficiently represented by single key metabolites with high predictive value. This approach presents a significant potential for cost-effective implementation, which could facilitate its adoption in clinical practice. Below is the link to the electronic supplementary material. Supplementary Material 1 |
The impact of gender medicine on neonatology: the disadvantage of being male: a narrative review | 453bc4bb-5782-4b12-af0f-f025e924f6c9 | 10245647 | Pediatrics[mh] | The aim of gender medicine is to pursue accuracy and personalization of diagnosis and treatment based on gender-specific evidence. Scientific studies aimed at maximizing prevention programs stress that there are biological differences related to the onset of some diseases and to the response to some drugs. The difference by sex in neonatal mortality, calculated considering race and birth weight, was already known at the beginning of the last century , and it is inversely related to gestational age. Recent data from the international database “Vermont Oxford Network” showed gender differences in both mortality and postnatal outcomes, with the worst prognosis for the male population . Although studies identifying sex as a risk factor for some diseases have progressively increased over recent decades, this phenomenon is being understood only recently. This has been possible by analysing the development of organs and systems, as well as their capacity for functional recovery following injury. We reviewed the articles published over roughly the last twenty years on PubMed/Medline ( http://www.ncbi.nlm.nih.gov/PubMed ), Embase ( https://www.embase.com/search/quick ) and Ovid ( http://www.ovid.com/ ) using the following terms: gender medicine, sex, newborn, and preterm. Forty-seven articles meeting the criteria were included. Papers concerning diagnostic or surgical procedures related to congenital malformations, syndromes involving the sexual apparatus, and lesions of the sexual organs were excluded. Prenatal aspects and childbirth Higher incidences of congenital diseases, preterm birth, and premature rupture of membranes have been observed in pregnancies with male newborns . Women carrying male foetuses had higher rates of gestational diabetes mellitus, foetal macrosomia, failure to progress during the first and second stages of labor, cord prolapse, nuchal cord, and true umbilical cord knots. A higher incidence of caesarean sections (CS) and preterm births, with a higher overall mortality rate, has been found in male neonates . The reasons are various and not fully understood yet. A study on women undergoing elective CS without complications showed a higher pro-inflammatory response in the plasma of male infants subjected to lipopolysaccharide stimulation . Such a result could be a reason for premature rupture of membranes and might explain the different reaction to neonatal infections. The rate of free beta-hCG is, in the first trimester, significantly higher in female foetuses, while the opposite holds for pregnancy-associated plasma protein-A levels. That may explain an increased risk for Down syndrome reported in pregnancies with female foetuses, but without statistical significance . Knippel et al. found higher levels of alphafetoprotein (AFP) in male foetuses and, due to the higher incidence of AFP-related malformations in females, hypothesized a protective role for this protein. A possible additional reason for the disadvantage of male birth is the increased metabolic demand due to accelerated growth, which causes greater vulnerability to even minimal reductions in fetal oxygenation and blood flow during both pregnancy and labor. Such differences, albeit small in oxygenation and lactacidemia, could increase susceptibility to early neonatal infections , explaining the worse outcome in case of an adverse event.
Drugs in pregnancy and postnatal effects Antenatal drug exposure causes different effects depending on sex. Neonatal abstinence syndrome secondary to prenatal opioid exposure is significantly more frequent and severe in the male population , while females show more benzodiazepine withdrawal symptoms, either alone or in combination with opioids . Among opioid-exposed infants, co-exposure to antidepressant medication is common. Some studies investigating its long-term effects reported a worse influence on male neonates, mainly an increased length of hospitalization, although without statistical significance . The infant gut microbiota, which shows no differences between the sexes at birth apart from small variations due to the type of delivery, can be influenced by maternal drug intake. In a group of newborns whose mothers required anti-asthma treatments during pregnancy, the amount of Lactobacilli in faeces was significantly lower in males , while a higher level of Bacteroidaceae was recorded in females. Postnatal period Postnatal adaptation is sex-related, in terms of both the prevalence of associated complications and the ability to recover from an adverse event . A neurobehavioral follow-up study on premature babies born at less than 28 weeks of gestational age found a lower incidence of complications such as cerebral palsy, deafness, blindness, and mental or psychomotor retardation in females . Recently, a meta-analysis involving 41 studies and 625,680 neonates confirmed this theory and demonstrated greater clinical instability and need for invasive interventions in preterm males. Additionally, it reported higher rates of bronchopulmonary dysplasia (BPD), retinopathy of prematurity, necrotizing enterocolitis, intraventricular haemorrhage, and periventricular leukomalacia . Although geographic factors, proper perinatal care and gestational age can reduce the gap between the sexes, the impression of a general “weakness” of males remains. Of note, this persists even after hospital discharge, especially for respiratory infections , through the first year of life . Gender differences in pulmonary development are noticeable as early as 16–20 weeks of gestation. Mouth movements, related to both swallowing and intrauterine “respiratory function”, are more frequent in female fetuses . Conversely, animal studies reported lower lung tissue stability , reduced gas exchange with no improvement in respiratory mechanics after steroid treatment , and an increased risk of lung injury due to hyperoxia in males. A possible reason for this might be the sex hormone levels circulating in the prenatal period. The amount of estrogen and progesterone is comparable between genders, as they both result from transplacental passage, but testosterone levels are higher in males . Another hypothesis focuses on the alveolar epithelial transport of Na+ as a determinant of the perinatal pulmonary transition , with differences among the sexes. Finally, a study measuring the diversity in the expression of microRNA during fetal lung development hypothesized a role for microRNA as a cofactor in lung diseases, both in the neonatal period and in adulthood . Furthermore, it is possible to hypothesize that the delayed lung development observed in male newborns causes a gap between the development of the airways and that of the lung parenchyma, thereby increasing airway resistance.
Overall, female fetuses produce surfactant earlier, move their mouths more, develop larger airways that are less reactive to insult, and develop more mature parenchyma. Therefore, males have a higher incidence and severity of respiratory distress syndrome (RDS), BPD, wheezing, asthma, and chronic diffuse interstitial lung disease, while cystic fibrosis is more severe in girls, who have a higher risk of complications and worse outcomes . Recently, a large and comprehensive systematic review and meta-analysis of preterm babies with persistent patent ductus arteriosus (PDA) showed no difference between boys and girls in either the incidence or the response rate to pharmacological treatment . A common belief among neonatologists is that PDA interacts with the respiratory course of premature babies. The presence of a hemodynamically significant PDA is frequently suspected based on respiratory findings, such as increased oxygen or mechanical ventilation requirements. Although male gender is associated with an increased risk of RDS and higher rates of intubation at birth, surfactant treatment, mechanical ventilation, and pneumothorax, current results suggest that the presence of PDA is unlikely to play a role in these sex differences in respiratory course. On the other hand, congenital heart disease (CHD) is significantly influenced by gender, not only in terms of incidence and severity, but also in postnatal evolution and long-term outcomes. However, this influence is not universal, and varies depending on the type of anomaly considered . Females have a higher incidence of less serious CHD, such as interventricular and interatrial defects, pulmonary stenosis, and aortic coarctation, while major pathologies like Fallot tetralogy or left hypoplastic heart are more prevalent in males . After surgical treatment, the volume index and ventricular masses are larger in males, as in the normal healthy population. Right ventricular hypertrophy and dilatation correlate with loading conditions in a similar way for both sexes. However, under comparable loading conditions, males show more severe functional impairment . Although the clinical history of infants with CHD is related to gender, overall CHD prevalence does not differ significantly between the sexes. A higher mortality rate has been reported in older males with CHD, while sudden cardiac death is more prevalent in young males. However, mortality for CHD after surgery is higher among girls compared to boys, probably due to their smaller body size. Women are at higher risk of developing pulmonary arterial hypertension but at lower risk of adverse aortic outcomes, although their likelihood of undergoing aortic surgery remains low. Moreover, females have a lower risk of infective endocarditis . Observations from clinical research in humans have suggested a difference in brain and neuronal physiology based on sex differences that begin in the fetal and newborn period, and extend throughout the human lifespan into adulthood. In premature infants, girls have significantly lower cerebral blood flow (CBF) than boys of similar gestational and postnatal age ; however, adult females have higher CBF than males. The mechanisms regulating these differences are not well understood, but the relative immaturity of CBF auto-regulation in premature infants may be the reason why females, with relatively lower cerebral blood flow, have a lesser incidence of germinal matrix or intraventricular haemorrhage. Pain sensitivity is another issue significantly connected with gender.
Male newborns and preterm infants are less tolerant of painful stimuli , although there is a difference in which side of the body is involved . This is probably due to bilateral somatosensory cortical activation, which is less evident in females and persists until adolescence . Conversely, a study conducted on an ex-preterm cohort in adulthood showed a lower capacity to modulate pain in females, with a consequent increased risk of developing persistent pathological pain, although the reason for this is still unclear . Differences between sexes exist in cellular and molecular development , which affect both normal neuronal function and the effectiveness of various therapies in cases of brain damage. However, the correlation with the behavioural and psychological aspects is still a matter of discussion . Sexual dimorphism of the fetus manifests during pregnancy. Intrauterine and postnatal growth nomograms are sex-specific. There is increasing evidence showing that from fetal life, boys and girls have different responses to maternal nutrition, and that maternal breastmilk composition differs based on fetal sex . Furthermore, early neonatal nutritional interventions affect boys and girls differently, and early nutrition has sex-specific effects on both body composition and neurodevelopmental outcomes . However, no studies have investigated whether nutritional requirements differ between the sexes. Thus, the current nutrition guidelines for preterm infants are unisex and could be sub-optimal. More information is needed to determine sex differences in infants’ macronutrient requirements, such as whether preterm females require higher fat intake and preterm males require higher protein intake for optimal growth and neurodevelopmental outcomes . Therapies effectiveness Pharmacological treatments have varying efficacy and side effects depending on a patient’s sex, especially in the preterm population . Unfortunately, scientific literature seldom covers gender differences in infant pharmacology, whether in randomized controlled studies or meta-analyses. The pharmacological inhibition of prostaglandin synthesis has been shown to promote the stability of germinal matrix vessels and prevent intraventricular haemorrhage (IVH) in preterm rabbit pups. A similar effect has been reported in humans. Two large North American trials investigated the early use of intravenous indomethacin in preterm infants at high risk for IVH, and the results showed a significant reduction in severe intraventricular haemorrhage only in the male population . On the other hand, less positive long-term cognitive outcomes and a higher mortality rate were observed in female infants . Therefore, this prophylaxis appears to be beneficial for males but potentially harmful for females. Conversely, hydrocortisone for BPD prophylaxis is more effective in females, increasing the rate of bronchopulmonary dysplasia-free survival . Experimental studies carried out by administering caffeine (an adenosine receptor antagonist) to rats have shown several positive effects on respiratory pattern, such as an increase in respiratory frequency in the early phase of response to hypoxia and in tidal volume in the late phase of response. This effect has been observed exclusively in male rats , probably due to long-term effects on the nucleoside receptor system. In addition, the increased expression of the adenosine (2A) receptor, which is specific to male rats, may have affected adenosine-dopamine interactions that regulate chemosensory activity.
Therapeutic hypothermia is a widely used procedure to protect neonates from hypoxic–ischaemic brain injury , which was found to be more effective in the female population, particularly in medium and long-term outcomes . For the same purpose, experimental treatment with the infusion of stem cells did not show differences between genders .
The aim of gender medicine is to improve care by considering the patient’s sex as a variable responsible for the onset and evolution of many diseases. Some differences are also reported among neonates, suggesting the need to consider sex variables in diagnostic and therapeutic pathways. Overall, while the male population seems to be more affected by diseases and related complications in the first months of life, a reversal of this trend has been noted during growth for some conditions (Table ). In addition to genetic and physiological aspects, social, demographic, and behavioural factors may also play an important role in this tendency. Based on the findings of this paper, we believe that a systematic review, including more sources and a longer time period, could better clarify the role of gender in neonatology. Therefore, we believe it is necessary to carefully consider the sex variable in both scientific research and clinical practice to have the most appropriate approach towards patients and to apply the most suitable care, therapies, and prophylaxis.
Identification of core objectives for teaching sustainable healthcare education | 0bb8c94b-4947-4e6f-a780-f7df35771020 | 5653939 | Preventive Medicine[mh] | Climate change and ecosystems degradation present the ‘greatest threat’ to public health in this century . Physicians will be called upon to care for patients who bear the burden of disease from the impact of climate change and ecologically irresponsible practices which harm ecosystems and contribute to climate change. Many diseases and health burdens are linked to climate fluctuations including respiratory illness, infectious diseases, and malnutrition . Furthermore, physicians work within the wasteful, high eco-footprint healthcare system, which has barely begun to embrace a culture of sustainability . Physicians are in a position to view sustainability from multiple angles, to move the health and healthcare culture toward greater ecological responsibility and, as a consequence, improve patient and public health. The latter position reflects the physician’s identity as one of an advocate to ‘promote those social, economic, educational, and political changes that ameliorate the suffering and threats to human health’ . However, physicians must first recognize the connection between the climate, ecosystems, sustainability, and health and their responsibility and capacity as health professionals in changing the status quo . Described by the Sustainable Healthcare Education Network, ‘sustainable healthcare education’ (SHE) is education about the impact of climate change, ecosystem alteration, and biodiversity loss on health as well as the impact of the healthcare industry on the aforementioned . Nomenclature related to SHE has included ‘environmental sustainability’ , ‘ecosystems and health’ , ‘ecosystem health’ , ‘climate change environment degradation, biodiversity and health’ , and ‘environmental accountability’ . There is currently little SHE in medical education curricula . The health impacts of environmental change will be experienced by all of society albeit unequally, with those least responsible for the change (e.g., children, the world’s poor) affected the most . Socially accountable education emphasizes the use of education, research, and service to address health concerns through approaches that engage interdisciplinary professionals, organizations, and the public . A SHE curriculum developed from within the framework of social accountability provides a critical scaffold for students and teachers to understand the importance of what is learned to the healthcare needs of the patients they serve. Moreover, a SHE curriculum resonates with aims for bettering the healthcare system by (1) improving population health through proactive anticipation of society’s healthcare needs and attention to prevention, and (2) reducing healthcare costs by focusing on the sustainability and resource efficiency (including containment of waste and cost) in the healthcare system . Currently few medical schools offer electives, some student-run, that focus on the impact of climate change on health and/or creating sustainable healthcare practices . A recent review found that medical students and physicians know about ecosystems but need more education on causes and consequences of environmental change . 
The Sustainable Healthcare Education Network developed a representative set of learning objectives to guide both undergraduate and graduate medical education in SHE, grouped into three priority learning areas: (1) describe how the environment and human health interact at different levels; (2) demonstrate the knowledge and skills needed to improve the environmental sustainability of health systems; and (3) discuss how the duty of a doctor to protect and promote health is shaped by the dependence of human health on the local and global environment. Little is known about which SHE objectives are core and when in the continuum of medical education core objectives should be introduced. Moreover, as knowledge proliferates, the demands increase for what learners should know to become physicians . Today the undergraduate and graduate medical education curriculum is crowded with content learners must know. Systematically developed SHE objectives are needed to guide medical educators to prioritize what they teach across the continuum of the crowded medical education curriculum. Ultimately, a SHE curriculum will provide physicians with the necessary awareness, knowledge, and skills to care for patients who experience the impact of climate and environment on health and advocate for the sustainability of the health systems in which they work. The aim of our study was to provide guidance on which SHE objectives should be included in the continuum of medical education, and when. Design and setting We used a modified Delphi approach to conduct a two-step survey of SHE experts between June and October 2015. The University of California, San Francisco (UCSF) institutional review board approved the study as exempt. Participants We surveyed physicians and academics who had expertise or engaged in one or more of the following activities around the topics of climate change, environmental literacy, environmental and/or ecosystem health, or healthcare sustainability : (1) research, (2) writing/publishing, (3) teaching, (4) activism, or (5) administration. Respondents consisted of experts identified through (1) a literature search, (2) a web search for individuals working in relevant organizations and the community, such as state departments for public health, Physicians for Social Responsibility, and Health Care without Harm (to ensure we held the principles of social accountability described earlier), and (3) snowball sampling. Snowball sampling techniques, in which respondents are asked to solicit other experts, ensured that as many expert perspectives as possible were represented . Objectives and survey Our survey consisted of a set of SHE objectives created in two distinct phases. During the first phase (2009), an education sub-committee, representing the professions at UCSF (Medicine, Dentistry, Nursing, and Pharmacy) and an interest in SHE, was charged by the UCSF Academic Senate Sustainability Committee. Based on a literature and web search, the sub-committee created a comprehensive set of SHE learning objectives for UCSF health professions learners. One committee member (AT) extracted articles with no start date through 2009 via PubMed and CINAHL in English using the search terms ‘ecosystems’, ‘climate change’, ‘environment’, ‘sustainability’, ‘environmental sustainability’, ‘health’, and ‘education’ with Boolean operators. Reference lists of identified manuscripts located additional articles. Articles were included regardless of type.
The committee member used the same terms to search for institutions whose focus was on SHE education. Another committee member (TN) drafted objectives. All committee members reviewed and revised the objectives. This process resulted in 30 SHE objectives. In the second phase (2015), the investigators consulted the priority learning outcomes developed by the Sustainable Healthcare Education Network , a collaboration of academics, physicians, and healthcare students. The Network created objectives through a structured feedback process from all medical schools, Royal colleges, post-graduate deaneries, and major medical organizations in the United Kingdom . These learning objectives were a representative set meant to guide undergraduate and graduate medical education. Our aim was to be comprehensive. Hence, where Sustainable Healthcare Education network and UCSF objectives aligned, we used the former’s 13 objectives. We then augmented the list by including additional eight UCSF objectives. The initial mapping of the objectives was completed by one author (AT). Subsequently three investigators (LTA, TN, SR) reviewed mapping of the two sets of objectives and organization by the Sustainable Healthcare Education Network’s priority areas (see Introduction). Discrepancies were discussed and consensus reached on the final objectives and their placement. The resulting 21 objectives were used in the survey. The survey asked respondents to provide demographic information and description of their SHE expertise. Respondents independently rated the importance of each objective using a 1 to 4 scale (1 = not very important, do not include; 2 = moderately important; 3 = important; 4 = very important); and when in training or the continuum of medical education (1 = premedical school; 2 = preclinical years of medical school; 3 = clinical years of medical school; and 4 = postgraduate years (e.g., residency, fellowship)) should a learner be taught this objective. To provide context (i.e., experiences with and perception of the objectives) to ratings, at the end of the survey we asked respondents to describe via open-ended questions if and how their institution addressed the objectives. We distributed the surveys via Qualtrics TM accompanied by an information sheet describing the modified Delphi procedure. Modified Delphi procedure One of the steps in curriculum development involves establishing content validity of content . This step determines whether the content measures the construct (i.e., core SHE knowledge) for the intended population (i.e., learners in the medical education continuum). To conduct a content validation of the objectives we conducted a modified-Delphi procedure. The Delphi technique is typically used to gather a reliable opinion from a group of experts via sequential surveys, including quantitative feedback on prior responses . In a typical modified-Delphi study experts complete a first round of ratings and are asked to complete a second round in which they are given the round 1 ratings of all the experts to allow them to reconsider their responses informed by information received from other experts . Respondents rated the importance and level of education for the objectives in round 1. In round 2, all original respondents including those identified during the snowball sample were re-surveyed. In this re-survey respondents were provided with their own individual round 1 ratings and all respondents’ distribution of ratings to inform their round 2 responses. 
For the snowball sample, 19 (47.5%) of the 40 experts (see Results section for response rate details) who responded to the first round recommended additional experts. Thirteen (68.5%) of the 19 respondents recommended between one and five experts and six (31.5%) recommended between six and 13 experts. The list of recommended experts was subsequently examined to determine which ones were not already surveyed or recommended by multiple experts. Analysis Respondents’ demographics were displayed using descriptive statistics. For each objective, we calculated an item-level content validity index (CVI) from the second round of ratings . A content validity index is used to quantify the relevancy of objectives and provides information about the proportion of respondents in agreement with relevance of each objective . For adequate content validity, a CVI of .78 or greater has been recommended in the literature . We considered objectives as having sufficient content validity if 78% or more (CVI = .78 or greater) of the respondents rated them as a 3 (important) or 4 (very important) . To determine whether the Delphi method impacted the ratings between rounds, we examined whether the mean variance changed between the two rounds. Level of education was analyzed using descriptive statistics. Open-ended questions were analyzed by one investigator (AT) using qualitative content analysis . The investigator first read through all responses to the open-ended questions and through an open coding process generated an initial list of codes. The investigator then applied the list of codes to all the responses. The investigator discussed coding uncertainties with an additional investigator (SR) and finalized coding through discussion.
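To make the CVI calculation above concrete, here is a small illustration (with invented ratings, not the study data) of how an item-level CVI can be computed as the proportion of experts rating an objective 3 or 4, using the .78 threshold mentioned above.

```python
# Illustrative item-level CVI calculation with made-up ratings on the 1-4 scale;
# an objective shows sufficient content validity if >= 78% of experts rate it 3 or 4.
ratings = {
    "objective_1": [4, 3, 4, 4, 3, 2, 4, 3, 4, 4],   # hypothetical expert ratings
    "objective_2": [2, 3, 1, 2, 4, 2, 3, 2, 2, 3],
}

CVI_THRESHOLD = 0.78

for name, scores in ratings.items():
    cvi = sum(1 for s in scores if s >= 3) / len(scores)
    verdict = "sufficient" if cvi >= CVI_THRESHOLD else "insufficient"
    print(f"{name}: CVI = {cvi:.2f} ({verdict} content validity)")
```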
Respondents rated the importance and level of education for the objectives in round 1. In round 2, all original respondents, including those identified through the snowball sample, were re-surveyed. In this re-survey, respondents were provided with their own individual round 1 ratings and the distribution of all respondents' ratings to inform their round 2 responses. For the snowball sample, 19 (47.5%) of the 40 experts (see Results section for response rate details) who responded to the first round recommended additional experts. Thirteen (68.5%) of the 19 respondents recommended between one and five experts and six (31.5%) recommended between six and 13 experts. The list of recommended experts was subsequently examined to determine which ones were not already surveyed or had been recommended by multiple experts. Respondents' demographics were summarized using descriptive statistics. For each objective, we calculated an item-level content validity index (CVI) from the second round of ratings. A content validity index is used to quantify the relevancy of objectives and provides information about the proportion of respondents in agreement with the relevance of each objective. For adequate content validity, a CVI of .78 or greater has been recommended in the literature. We considered objectives as having sufficient content validity if 78% or more (CVI = .78 or greater) of the respondents rated them as a 3 (important) or 4 (very important). To determine whether the Delphi method impacted the ratings between rounds, we examined whether the mean variance changed between the two rounds. Level of education was analyzed using descriptive statistics. Open-ended questions were analyzed by one investigator (AT) using qualitative content analysis. The investigator first read through all responses to the open-ended questions and, through an open coding process, generated an initial list of codes. The investigator then applied the list of codes to all the responses. The investigator discussed coding uncertainties with an additional investigator (SR) and finalized coding through discussion. We sent the survey to 50 experts, of whom 40 (80%) responded, and subsequently to 32 experts identified through the snowball sample, of whom 12 (37.5%) responded. In total, 52 of 82 (63.4%) experts completed the surveys in both rounds. The accompanying table displays participants' demographic data. Most respondents were from the United States, physicians, and affiliated with a public university. Respondents ascribed their expertise to multiple areas, of which research was most prominent. The mean ratings, CVIs, and modal time periods in training for the proposed objectives are also tabulated. Fifteen of the objectives achieved a CVI of 78% or greater. Of these fifteen, three objectives received CVIs of 90% or greater. The objectives with a CVI of 78% or greater were part of all priority areas and included every objective in area 1 (how the environment and human health interact at different levels). Six objectives had CVIs between 58% and 77%; of these, three received CVIs of less than 70%. The average variance for round 1 ratings was .67, which remained stable through the second round at .68, indicating that participants did not change their ratings much between rounds. The preclinical years of medical school were rated as the appropriate time for introducing 13 of the objectives, and the clinical years were rated as the optimal time for introducing six of the objectives.
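As a concrete illustration of the item-level CVI described above, the minimal sketch below (Python; not part of the original analysis, and the ratings shown are hypothetical) computes the proportion of experts rating an objective as important (3) or very important (4) and applies the .78 cut-off.

# Illustrative sketch: item-level content validity index (CVI) as described above.
# The CVI is the proportion of experts rating an objective 3 (important) or
# 4 (very important); the ratings below are hypothetical.

def item_cvi(ratings):
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

def sufficient_content_validity(ratings, threshold=0.78):
    return item_cvi(ratings) >= threshold

example_ratings = [4, 3, 3, 4, 2, 4, 3, 3, 4, 4, 3, 2, 4, 3, 3, 4, 4, 3, 3, 4]
print(round(item_cvi(example_ratings), 2), sufficient_content_validity(example_ratings))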
A majority of the respondents felt that learners should learn the definition of environmental sustainability prior to medical school and identify ways to improve the environmental sustainability of health systems in postgraduate training. In response to the open-ended questions, nineteen respondents stated that their institution or workplace explicitly addressed some SHE objectives. These objectives were addressed primarily in the preclinical medical school curriculum, with some institutions covering the objectives during elective courses or postgraduate training. The objectives covered by respondents' institutions included those pertaining to environmental health and sustainability of the workplace (e.g., general recycling procedures), research practice (e.g., sustainable laboratory practices), or the environmental impact of the healthcare system (e.g., waste production after patient care). Respondents stated that the objectives should be taught throughout medical education in an iterative format and noted that these objectives should be included in testing (e.g., standardized tests) to integrate and reinforce the importance of SHE education. As human impact exerts pressure on the planet's resources, the health of both ecosystems and humans is threatened. It is critical for learners to be conscious of, educated about, and responsive to this impact. Accordingly, we determined which SHE objectives should be taught, and when, in the continuum of medical education. Overall, our respondents indicated considerable agreement around which SHE objectives were important. Most objectives were considered important, with the objectives on the interaction between the environment and human health viewed as vital. Respondents noted that most, but not all, of the objectives should be covered primarily during the preclinical and clinical years of medical school. Based on the modified Delphi ratings, the accompanying table displays the core SHE objectives and when each objective can be introduced during the continuum of medical education. This table serves as a guide for schools considering creating a SHE curriculum. Most of the objectives on the survey were primarily knowledge-focused. Five of the objectives focused partially or completely on skills or attitudes related to SHE (i.e., take a focused occupational and environmental history, diagnose and prevent adverse health effects, identify patients most vulnerable to climate change, evaluate work for level of sustainability, recognize and articulate personal values). Of these five, consensus for inclusion was attained for two (i.e., take a focused occupational and environmental history and evaluate work for level of sustainability). The lower consensus for the skill/attitude objectives may have been because of the preponderance of knowledge objectives. However, SHE is in the early stages of discourse on development and inclusion. Hence it is likely that the prioritizing of knowledge-based objectives reflects the most immediate need to address basic gaps in knowledge. The latter perspective is corroborated in recent work, in which Walpole and colleagues note that health professionals have basic SHE awareness but lack knowledge of its many aspects. Moreover, they note that the attention given to SHE in medical education has been sparse.
We found that although SHE was covered at a few of the respondents’ institutions, the primary focus was on the sustainability-of-practice aspect of SHE (i.e., sustainability in the workplace, research and provision of healthcare) and was not always part of core education. These findings speak to the larger challenge of the crowded medical school curriculum faced by those of us considering the inclusion of SHE and those who seek to teach topics critical to evolving societal healthcare needs such as nutrition, violence prevention, and structural competency. Our study accounted for part of this challenge by prioritizing which objectives should be taught when. Recent discussion is beginning to address the next step in SHE curriculum development which points to a broad range of pedagogical approaches that may be used in SHE such as case-based, didactic, e-learning, and skills-based methods . Ultimately, these suggestions do not provide a thorough solution to the problem of the overcrowded curriculum but serve to mitigate some of the overcrowding. Future research should explore how institutions have chosen to implement objectives, instructional methods selected, and lessons learned. In addition, it will become essential to explore how institutions have secured support from the leadership or have leveraged existing structures to include SHE content in the curriculum. Limitations of our study were that most of our respondents were from the United States. Our snowball sample was identified by less than half of the respondents in round 1, potentially skewing the perspectives offered. We limited our respondents to one option when rating the ‘timing in training’ question for each objective. We were seeking optimal time in training for each objective; however, allowing one option may have limited respondents from recommending all applicable time periods for each objective. Physicians, in their role as advocates, are accountable to society to improve the health of patients and communities . This advocacy includes environmental accountability defined as the ‘obligation (of medical schools) within the social accountability framework to ensure their education, research, and service activities help to actively develop, promote, and protect environmentally sustainable solutions to address the health concerns of the community, region, and the nation that they have a mandate to serve’ . A SHE curriculum places at the nexus of what physicians need to know the impact of the climate and environment on health as well as the impact of the healthcare system on the environment. Ultimately SHE education is vital as climate change and environmentally unsustainable practices pose perils to human health and existence. Increased knowledge means environmentally sustainable practices are learned and further environment-related deterioration of the health of society and planet, prevented. |
Primary antibiotic prophylaxis in biliary atresia did not demonstrate decreased infection rate: Multi‐centre retrospective study | 39caff44-6a4b-4445-a802-5448d075fdbf | 11828718 | Surgical Procedures, Operative[mh] | INTRODUCTION Biliary atresia (BA) is a rare disease of the neonatal period, which is characterised by obstructive cholangiopathy, more prominent in the extra‐hepatic biliary tree. BA occurs worldwide, with variable incidence, from 1:18000 live births in Europe to 1:6600 in Asia. Data are not available of the incidence of BA in Israel in the last decades. Without treatment, cholestasis inevitably progresses, leading to fibrosis and liver failure during the first months of life, followed by death during early childhood. Treatment of BA includes Kasai portoenterostomy (KPE) surgery at the time of diagnosis, and liver transplantation (LT) if the procedure fails or complications appear. KPE outcomes are better when performed at a younger age. , , Liver fibrosis progresses in patients with BA, both after a failed and successful KPE, albeit at a slower rate. Complications such as ascending cholangitis and portal hypertension leading to variceal bleeding and ascites frequently follow KPE operations. Current management post‐KPE varies between centres, mainly due to the low incidence of the disease and the lack of strong evidence to support one practice over another. One critical question regarding the management of complications remains unanswered. The utility of prophylactic antibiotics in preventing ascending cholangitis has been highly debated due to conflicting results. While one study found no significant difference in cholangitis incidence with longer durations of prophylactic therapy, another reported that 79% of patients developed ascending cholangitis despite routine antibiotic prophylaxis. Although prophylactic antibiotics are common practice in many centres, their utility remains unclear. Our study included patients from four centres in Israel, who were treated according to various clinical practices. The main aim of this study was to investigate the efficacy of primary antibiotic prophylaxis in preventing cholangitis. METHODS This is a retrospective multi‐centre study of data recorded during 2008–2018. Medical records were reviewed of all the children with BA or suspected BA who were treated in one of four treatment centres in Israel during the study period. These four centres are estimated to treat 75% of the patients with BA in Israel. BA was diagnosed by clinical and biochemical data, surgical reports (including positive cholangiogram) and liver histology. Patients with a genetic cause for cholestasis were excluded from the study. Medical, surgical and pathological data were reviewed. The recorded data included: ethnicity, date of birth, the age at the time of KPE, the treatment centre, the postoperative course and complications. The latter included biliary or chyle drainage, postoperative peritonitis, surgical wound infection, the use of primary antibiotic prophylaxis (which was determined based on the individual physician's practice) and laboratory data before KPE and during follow‐up. Data during the follow‐up included bilirubin level, nutritional support, complications including cholangitis (as determined based on the diagnosis noted in the medical discharge papers), ascites and variceal bleeding, and treatment and outcome data. 
Treatment data included variceal upper endoscopy screening and findings, whether LT was performed, the date of LT and the type of liver graft. The outcomes included postoperative complications and death of any cause. Patients were followed until LT or until the end of the study (December 2018). KPE success was defined by a serum bilirubin level of ≤2 mg/dL at 3 months post‐KPE. 2.1 Statistical analysis Categorical variables were summarised as frequencies and percentages. Continuous variables were evaluated for normal distribution using histograms and the Kolmogorov–Smirnov test. Continuous variables that were normally distributed were reported as means and standard deviations (SD), while other variables were reported as medians and interquartile ranges (IQR). Associations of categorical predictors with the occurrence of bleeding, cholangitis and liver transplantation were examined using Kaplan–Meier curves and the log‐rank test. The association with continuous variables was assessed using Cox regression. The association between categorical variables and KPE success at 3 months was evaluated using the Chi‐square test or Fisher's exact test. Associations with continuous variables were evaluated using the independent samples t‐test or Mann–Whitney test. All the statistical tests were two sided and p < 0.05 was considered statistically significant. SPSS software was used for all the statistical analyses (IBM SPSS Statistics for Windows, version 25, IBM Corporation, Armonk, NY, USA, 2017). RESULTS Seventy‐two patients were diagnosed with BA during the study period. Thirty‐nine (54%) were male and 33 (46%) were female. The majority of patients (68%) were Jewish, followed by 29% Muslims and 3% of other or unknown origin. This is consistent with the demographic distribution of the Israeli population. Our four medical centres accounted for about 75% of all the patients with BA in Israel; with 72 cases over 10 years, this corresponds to an estimated 96 cases of BA nationally over 10 years. With an average of 150 000 live births per year, the estimated incidence of BA in Israel was 1 in 15 000 live births. The median age at BA diagnosis was 51 days (IQR: 32.5–60). Two patients required a second liver biopsy for a definite diagnosis. The median age at KPE was 58.5 days (IQR: 47–71) (Table ). At 3 months post‐KPE, 23 patients (32%) had a successful KPE, as defined above.
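Purely to illustrate the Kaplan–Meier and log-rank approach named in the statistical analysis section above (the study itself used SPSS), a minimal Python sketch using the lifelines package follows; the input file and column names (days_to_cholangitis, cholangitis, prophylaxis) are hypothetical.

# Illustration only (the study used SPSS): Kaplan-Meier curves and a log-rank
# test for time to first cholangitis episode, stratified by primary antibiotic
# prophylaxis. The input file and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ba_cohort.csv")
treated = df[df["prophylaxis"] == 1]
untreated = df[df["prophylaxis"] == 0]

km = KaplanMeierFitter()
km.fit(treated["days_to_cholangitis"], event_observed=treated["cholangitis"],
       label="primary prophylaxis")
ax = km.plot_survival_function()
km.fit(untreated["days_to_cholangitis"], event_observed=untreated["cholangitis"],
       label="no prophylaxis")
km.plot_survival_function(ax=ax)

result = logrank_test(treated["days_to_cholangitis"], untreated["days_to_cholangitis"],
                      event_observed_A=treated["cholangitis"],
                      event_observed_B=untreated["cholangitis"])
print("log-rank p-value:", result.p_value)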
Among the patients with a successful compared to a failed KPE, the mean age at diagnosis and mean age of KPE were lower: 42 versus 53 days, p = 0.023 and 54 versus 63 days, p = 0.04, respectively. The success rate of KPE was 47% among females and 22% among males ( p = 0.032). Of the 72 patients who underwent KPE, 62 (87%) received perioperative antibiotic treatment; the median duration was 4 days (range: 2–15 days). The most commonly used antibiotic was piperacillin‐tazobactam, which was administered to 33 patients (51%). Gentamicin was the next most common, administered to 21%, then clindamycin (13%) and ampicillin (13%). Twenty‐six patients (39%) received a combination of more than one antibiotic. Sixty‐three patients (87%) had an uneventful perioperative course. Perioperative wound infection occurred in three (4%) patients. Chyle drainage occurred in one patient and postoperative peritonitis occurred in five patients (7%). Biliary secretion occurred in one patient which required re‐anastomosis. For most of the patients, the drain was removed routinely a few days after the procedure, once there was less than 100 mL of fluids per day. Forty‐nine (68%) of our patients had ascending cholangitis during the study period. The median time of the first episode of ascending cholangitis post‐KPE was 93 days (SD = 26.8 days, 95% confidence interval [CI] 40.4–145.5). For the patients who experienced ascending cholangitis, the first episode occurred during the first year of life, except for two patients (4%) whose first episodes occurred at the age of 2 years. Thirty‐five (71%) of the patients with ascending cholangitis had more than one episode (up to eight episodes occurred per patient). There was no significant difference in SNL between patients who did not experience cholangitis and those who had a single cholangitis episode ( p = 0.858). However, patients who had two or more cholangitis episodes, compared to none, had a hazard ratio of 0.383 (95% CI: 0.162–0.908, p = 0.029) for liver transplantation, indicating a 62% reduced risk for liver transplantation. The choice of antibiotic regimen for cholangitis treatment varied depending on the specific protocols of each centre. The antibiotics used to treat cholangitis episodes were as follows: piperacillin‐tazobactam (67 out of 128 cholangitis episodes, 52%), metronidazole (34/128, 26%), ceftriaxone (30/128, 23%), ampicillin (17/128, 13%), ciprofloxacin (14/128, 11%), gentamicin (13/128, 10%), cefotaxime (12/128, 9%), meropenem (11/128, 9%), amoxicillin‐clavulanic acid (3/128, 2%), amikacin (6/128, 5%), ceftazidime (5/128 4%), ertapenem (5/128, 4%), vancomycin (3/128, 2%), cefazolin (2/128, 1%) and trimethoprim‐ sulfamethoxazole (1/128, 1%). Of note, some patients were treated with a combination antibiotics regimen during episodes of cholangitis. Bacteraemia occurred in 12 (24.5%) patients during ascending cholangitis episodes. Klebsiella pneumoniae was responsible for 42% of the positive cultures. Among the patients with successful and failed KPE, the incidence of ascending cholangitis was similar: 6/23 (26%) and 15/45 (33%), p = 0.607, respectively. The median times to the first ascending cholangitis event were also similar: median 45 and 77 months, respectively, p = 0.607 (Figure ). Onset of the first ascending cholangitis episode was later among patients who received perioperative treatment with piperacillin‐tazobactam than among those who received another antibiotic or no antibiotic treatment (after a median 166 vs 88 days, p = 0.02). 
Perioperative ceftriaxone treatment compared to treatment with another or no perioperative antibiotic resulted in earlier onset of the first ascending cholangitis episode (after a median 18 vs 127 days, p = 0.015). Primary antibiotic prophylaxis was given to 35 (49%) patients; trimethoprim‐sulfamethoxazole was the most commonly used antibiotic (21 patients, 29%), followed by cephalexin (10 patients, 14%). Among those treated with prophylactic antibiotics compared to those not treated, the first ascending cholangitis episode occurred earlier (after a median 77 vs 239 days, p = 0.016). Of the 49 patients diagnosed with ascending cholangitis, 28 (57%) received primary antibiotic prophylaxis, while 21 (43%) did not. Of the latter group, five (24%) were prescribed secondary prophylactic therapy. Ciprofloxacin was the most frequently prescribed antibiotic for secondary prophylaxis, administered to three of 12 patients (25%) who received secondary prophylaxis. Thirty patients (42%) developed ascites during the study follow‐up period. Seven patients (10%) presented with variceal bleed. The median time to variceal bleed was 417 days (IQR: 242–881); three of seven patients (42%) had ascites prior to the bleed. Platelet counts of the seven patients were >100 K/micl. Two of them underwent liver transplantation during the study period, at the age of 8.8 months (22 days after the bleeding event) and 15.7 months (153 days after the first bleeding event). A total of 28 patients (39%) underwent LT during the study follow‐up. Survival with native liver (SNL) was 54% at 5 years (Figure ). Three patients died, two of them while on the waiting list for LT and one due to his congenital heart disease and heart failure at the age of 8 months. Fifteen (58%) of the transplanted patients had living‐related donors and 11 (42%) had undergone cadaveric liver transplantation. DISCUSSION Data regarding managing patients with BA post‐KPE are sparse and are mostly based on personal experience and expert opinion. Our study suggests that the rate of ascending cholangitis episodes was not lower among patients who received primary antibiotic prophylaxis than those who did not receive antibiotics prophylaxis. KPE was successful for 23 (32%) of our patients. This is lower than previously reported, in the range of 40%–76%. , The lower rate is likely due to the relatively advanced age at the time of KPE, at a median of 58.5 days (IQR: 47–71). This surprisingly advanced age of KPE does not reflect the four centres awareness for direct hyperbilirubinaemia, and is likely due to late referrals from HMOs or from peripheral hospitals to tertiary hospitals. In our cohort, earlier age at diagnosis and surgery were associated with successful KPE, similar to previous reports. , , Further efforts should be invested in a national and international screening programmes to promote earlier diagnosis which would result in better outcome for these patients. Though there are a few papers describing surgical techniques of anti‐reflux valve or wider anastomosis, future studies are required to determine their utility. It is important to note that we did not collect data on the length of the Roux‐en‐Y loop as it was not written in all of the surgical reports; however, the practice mostly used in the four centres is a 50 cm loop. In our cohort, SNL was 54% at 5 years, similar to 5‐year rates that were previously reported: 41% in an Italian series, 41% in a French series, 58% in a Chinese series and 46% in a series conducted in England and Wales. 
Two main prognostic factors that affect SNL rates are KPE success, and the age at the KPE procedure. Our patients with successful KPE were not transplanted during the follow‐up period ( p = 0.001). While recurrent cholangitis was associated with increased native liver survival, this was oddly not associated with KPE success, perhaps related to study size. It is possible that patients who achieved better bile drainage post KPE, had patent bile ducts making them susceptible to recurrent cholangitis infections. Ascending cholangitis is a common and serious complication after KPE, which occurs due to an ascending infection via intestinal Roux loop. The use of prophylactic antibiotic treatment is controversial. Though no data support its use, many centres routinely prescribe antibiotics for a period of many months and even a year. Moreover, an expert panel recently recommended its use. Notably, prolonged antibiotics use is not without risks. The use of antibiotics in children can lead to several disadvantages, including the disruption of healthy gut bacteria, which may affect digestion and immunity. Overuse or misuse of antibiotics can also contribute to the development of antibiotic‐resistant bacteria, making future infections harder to treat. Additionally, some children may experience allergic reactions or side effects such as diarrhoea or nausea. A recent study demonstrated higher rates of multi‐drug‐resistant organisms after LT in patients with a primary diagnosis of BA. The incidence of ascending cholangitis was not lower among our patients who received prophylactic antibiotic therapy than among those who were not treated. Interestingly, the first episode of ascending cholangitis occurred earlier among those who were treated with antibiotic prophylaxis, though this finding could be spurious due to the relatively small number of patients. A systematic review of the utility of prophylaxis antibiotics identified only four relevant studies with contradictory results. A later Italian study reported no evidence for the prevention of ascending cholangitis in infants with BA who received antibiotic prophylaxis. Another retrospective study reported significantly more episodes of ascending cholangitis among patients who received antibiotic prophylaxis than among those who were not treated. Multi‐centre prospective studies are needed to determine antibiotic utility in this context. This study has several inherent limitations. First, due to its retrospective nature, the definition of cholangitis was determined based only on the diagnosis noted in the medical discharge papers and not on predetermined strict study criteria. , Additionally, while we reviewed all surgical reports from our patients to ensure that intra‐operative cholangiogram was performed or attempted in order to confirm the diagnosis of BA, we did not collect data regarding anatomical location of the obstruction, which is considered a prognostic factor of KPE success. , Furthermore, we did not gather data on the length of the Roux‐en‐Y loop, as this detail was not consistently recorded in all surgical reports. However, the typical practice across the four centres involved in the study is the use of a 50 cm loop. It is also worth noting that the majority of patients in this study (54 out of 72, 75%) were treated at two major referral centres that do not administer steroids post‐KPE. 
Due to the small number of patients treated at centres that do use steroids, we did not analyse this variable, as the lack of statistical power would have limited meaningful conclusions. As such, our results do not contribute to the ongoing debate on whether steroids improve jaundice clearance or reduce cholangitis rates compared to non‐steroid treatment. In conclusion, this study suggests a lack of benefit of primary antibiotic prophylaxis for preventing cholangitis episodes. However, further prospective multi‐centre studies should determine its utility. Yael Brody: Investigation; writing – original draft; data curation. Mordechai Slae: Writing – review and editing. Achiya Z. Amir: Writing – review and editing. Yael Mozer‐Glassberg: Writing – review and editing. Michal Rosenfeld Bar‐Lev: Writing – review and editing. Eyal Shteyer: Writing – review and editing. Orith Waisbourd‐Zinman: Supervision; conceptualization; writing – review and editing; writing – original draft. We did not receive funding for the study. The authors have no relevant financial or non‐financial interests to disclose. This study was approved by all the participating institutional review boards in accordance with the Declaration of Helsinki. |
Editorial: Update on epidemiology, endocrinology and treatment of cryptorchidism | cd25f496-d3a3-4ede-b707-cfaabaa96d9a | 11194432 | Internal Medicine[mh] | HV: Writing – review & editing, Writing – original draft. KM: Writing – review & editing, Writing – original draft. JT: Writing – review & editing, Writing – original draft. |
A worksite intervention to reduce the cardiovascular risk: proposal of a study design easy to integrate within Italian organization of occupational health surveillance | 5943d202-3988-4389-88a2-af896a1b81e1 | 4310171 | Preventive Medicine[mh] | The worksite has been proposed by the World Health Organization (WHO) as a priority setting for health promotion in the 21 st century: The worksite directly influences the physical, mental, economic and social well-being of workers and in turn the health of their families, communities and society. It offers an ideal setting and infrastructure to support the promotion of health of a large audience . Worksite health promotion programs originated in US from executive fitness programs that were created in the years after World War II. Initiated by business leaders who endorsed the benefits of a healthful lifestyle, the number of in-house corporate programs grew steadily throughout the 1970s. During the next decades, employer benefits began to focus on management of prevalent chronic conditions (obesity, diabetes, heart disease, cancer, and depression) instead of focusing on fitness and were increasingly offered to employees of all job levels . Work health programs (WHPs) carried out in the past decade showed, in particular, promising results in contrasting the modifiable risk factors of cardiovascular diseases (CVDs) defined as: 1) physical inactivity, 2) tobacco use, 3) hypertension, 4) dyslipidemia, 5) poor diet, 6) hyperglycemia, and 7) elevated psychological stress . Several review papers have been recently issued on WHPs and cardiac rehabilitation, authored by leading authors from the United States , Canada , Brazil , Europe , India and Japan . Although the delivery models, level of development/utilization, and legislative support varied among countries, these reviews clearly indicated that worksite health and wellness are important lifestyle intervention strategies and should be viewed as integral components of global healthcare with respect to combating CVDs . Little has been done in Italy. Even the specific notion of WHP and lifestyle modification interventions is unknown within the Italian legal system, particularly in the recent set of rules for health and safety in workplaces contained in Legislative Decree 9/4/2008 No. 81 (updated in 2013) . A barrier is probably the fact that in Italy common diseases are entrusted to the general practitioner (public service), while occupational diseases are assigned to the occupational physician (private service). Employers are reluctant to support additional financial costs to improve their employees’ health, because this task falls to public health care organizations. On the other hand, employed individuals are unable to attend primary care services during the working day and may not wish to utilize their “citizen time” (time spent outside work) for this; in addition, males are less likely than their female counterparts to schedule annual health checks, seek medical advice, or attend educational meetings . Despite the substantial amount of knowledge on effectiveness of WHPs, these interventions are not systematically applied in Italy. We therefore aimed to design an educational intervention that could be easy to integrate within the Italian current organization of occupational health surveillance and reasonably priced. Study design Quasi-experimental study designs, often described as pre-post intervention studies or before-and-after studies, are common in the medical literature. 
We used a quasi-experimental design and precisely the “one-group pretest-posttest design” . The key outcome was reducing the cardiovascular risk over the next 10 years. The latter was computed with an algorithm − proposed by the European Society of Cardiology and acknowledged by the Istituto Superiore di Sanità (Italian National Institute of Health) , combining information on sex, age, smoking habit, diabetes, blood pressure (mmHg), and blood cholesterol level (mg/dl). The modifiable risk factors (smoking, blood pressure, cholesterol), along with physical inactivity and alcohol intake were targeted by the intervention. There are multiple primary outcomes and this is, therefore, a multi-faceted worksite intervention to promote favorable changes in cardiovascular disease risk factors. Study size was set at about 5,000 workers based on the available budget rather than power considerations. Workers were employed in a wide range of occupational sectors (private and public businesses, industry and services) and were resident in various provinces (Padova, Verona, Vicenza) of Veneto, Northeastern region of Italy. An intervention across multiple occupational groups from separate geographic communities could increase confidence that the intervention was responsible for a change in outcome. The ultimate rationale of this procedure was, therefore, assessing consistency of results. Lastly, an unstructured interview (qualitative method) was conducted by a trained occupational physician (OP) among a small group of information-rich workers to answer the questions: «How did the intervention have that effect?»; and «What was the reaction of participants to the intervention?» . Sampling frame and running the study A snowball sampling method was used to select these workers. Two authors (LM and GM) chose the occupational physicians based on their scientific interests; and OPs chose the companies where they had the best relationships with both employers and employees. An invitation letter was sent to the management of these companies, explaining the aim and methods of the study; the companies willing to cooperate constituted the final sample of 5,536 workers. Before investigation, OPs were trained on counseling techniques, mainly focused on diagnosing the worker’s motivational state to change risky behaviors. Each OP was given a fixed incentive (20 euro) for each worker examined. All investigations were performed by OPs during the normal health surveillance and took place in the worksite. A computer aided interviewing software (Microsoft Access) was set up to store the data. Information was collected on: lifestyle factors (physical activity, cigarette smoking, alcohol consumption); past medical history (particularly, occurrence of CVDs, diabetes and obesity) and whether the subject was under therapy for diabetes, hypertension and hypercholesterolemia. At physical examination, blood pressure was measured twice at 4–5 minutes distance, always right arm and worker in upright position; the lowest value was used in the statistical analysis. The laboratory evaluation was performed in the workplace itself, collecting specimens of capillary blood and measuring glycaemia and cholesterolemia with portable devices. The procedure for collecting capillary blood specimens by fingerstick was that recommended by Centers for Disease Control . The cardiovascular risk over the next 10 years (CVD risk) was computed with the above algorithm and scored in classes (<5%; 5-10%; 10-15%; 15-20%; 20-30%; >30%). 
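The risk equation itself (the ESC algorithm endorsed by the Istituto Superiore di Sanità) is not reproduced here; the short sketch below only shows, under an assumed half-open boundary convention, how an already-computed 10-year risk percentage could be assigned to the classes listed above.

# Illustrative sketch: binning an already-computed 10-year cardiovascular risk
# (in percent) into the classes used in the study. The risk equation itself is
# not reproduced here, and the boundary convention is an assumption.

def risk_class(risk_percent):
    bounds = [(5, "<5%"), (10, "5-10%"), (15, "10-15%"), (20, "15-20%"), (30, "20-30%")]
    for upper, label in bounds:
        if risk_percent < upper:
            return label
    return ">30%"

for value in (3.2, 7.5, 18.0, 42.0):
    print(value, "->", risk_class(value))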
After 12 months, workers were re-examined with the same protocol. The exams began after a letter of information to the supervisory body for workplace safety and health of the relevant Local Health Authority. The project was run from January 2011 up to December 2012. Clearance by Ethics Committee was not necessary because the study was a mandatory activity deliberated by Veneto Region with a formal act (Regional Decree n. 2008 3 Aug 2010). All workers signed an informed consent at enrollment. The original 5,536 workers were divided in several subsets during the course of the study. We did not consider 2,062 subjects aged less than 40 years, 65 individuals not giving the consent, 36 patients already affected by CVD, and 537 under current therapy for hypertension, hypercholesterolemia and diabetes. 2,836 subjects older than 40 years without past CVD history or current therapy for diabetes, hypertension, and hypercholesterolemia underwent laboratory evaluation. 59 workers with missing information on one or more of the six components (sex, age, smoking, diabetes, blood pressure and cholesterol) used to estimate the cardiovascular risk, and 2,326 workers with a CVD risk below 5% were excluded. The remaining 451 underwent the educational intervention. Out of the latter, 330 workers (323 males and 7 females) with a CVD risk >5% were re-examined at 1 year, while 121 (26.8% = 121/451) were lost to follow-up. All analyses were carried out in the 323 males because seven subjects could not be used to arrive at any conclusions regarding female gender. Educational intervention There were two aspects: motivation and education. The most compelling argument for changing lifestyle was the estimated risk of CVDs in the next 10 years. Then subjects received an individualized counseling based on the presence of risk factors. Physical activity was generally mistaken with “exercise” (activity that is planned, structured, repetitive, and purposeful). In agreement with World Health Organization , workers were recommended to do at least 150 minutes of moderate-intensity aerobic physical activity throughout the week (example 30 minutes 5 times/week). For diet, recommendations were to limit energy intake from total fats and shift from saturated fats to unsaturated fats, increase consumption of fruits, vegetables, legumes, whole grains and nuts, limit the intake of free sugars and limit salt consumption from all sources . Subjects with hypertension and/or hypercholesterolemia or hyperglycemia were interviewed about their attitude towards lifestyle change; whenever they could not cope to recommendations they were referred to their general practitioners for medical therapy, even when the CVD risk was lower than 20%. Likewise, most smokers with CVD risk >5% were addressed to receive an anti-smoking counseling from counselors with educational competence. In other words, we used an aggressive approach that combined both a primary and secondary prevention. Statistical analysis In order to determine whether the intervention had the intended effect we calculated proportions with the factor (such as, proportions of smokers, before and after), the ratio between proportions (point estimates and confidence intervals) and the exact McNemar significance probability (for 2 × 2 tables). For outcome variables with multiple discrete levels (k × k tables), we performed an exact test of table symmetry. 
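To make the paired pre/post comparison concrete, the sketch below applies the exact McNemar test from statsmodels to a hypothetical 2 × 2 table of smoking status before and after the intervention; the counts are placeholders, not the study data.

# Illustration of the exact McNemar test on a paired pretest/posttest 2 x 2 table
# (here: smoking status before and after the intervention). Counts are placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# rows: pretest (smoker, non-smoker); columns: posttest (smoker, non-smoker)
table = np.array([[120, 30],
                  [5, 168]])

result = mcnemar(table, exact=True)
print("exact McNemar p-value:", result.pvalue)

# ratio between posttest and pretest proportions of smokers
pre = table[0].sum() / table.sum()
post = table[:, 0].sum() / table.sum()
print("posttest/pretest proportion ratio:", round(post / pre, 2))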
A sensitivity analysis was carried out, performing an exact test of table symmetry on 451 subjects that included the 121 subjects (26.8%) lost to follow-up. The latter contributed to the analysis assuming that their pretest value of cardiovascular risk remained unchanged at 1 year. We coded a binary variable (delta) that was 1 if the pretest CVD risk was higher than the posttest CVD risk, and 0 otherwise. A low value of delta seemingly indicates a worse impact of the intervention. Since figures became too sparse in the subset of 330 workers undergoing intervention, occupational categories were merged into four groups: "basic metals" (original category); "other industries" (multiple categories); hospital workers (original category); and other service workers (multiple categories). Using delta as the outcome, we fitted two models of logistic regression in which the predictors were age, gender and work sector (model 1); or age, gender, posttest smoking, posttest blood cholesterol, posttest systolic blood pressure and work sector (model 2). In all models the work sector with the lowest value of delta was the reference. Odds ratios (OR) with 95% confidence intervals (CI) and p-values were calculated with Stata 13 (Stata Corporation, College Station, Texas, USA). A statistical process had determined a detectable characteristic (CVD risk) associated with an increased chance of experiencing future unwanted outcomes; by identifying this risk before the occurrence of the event, we developed targeted interventions to mitigate its impact. In order to obtain the number of CVD cases expected to be prevented by the intervention, we multiplied, in each class of risk, the median CVD risk by the number of subjects. The sum of these values gave the expected cases at pretest (A) or posttest (B). The number of CVD cases potentially prevented by the intervention was the difference (A – B). The cost outcome analysis was obtained by dividing the overall cost by the number of potentially prevented cases.
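The expected-cases arithmetic just described can be sketched as follows; the class counts are placeholders (the study's own counts appear in its tables), the class medians are approximated here by interval midpoints, and the 34,000 euro of external resources quantified later in the paper is used for the cost per prevented case.

# Illustrative sketch of the expected prevented cases and cost-outcome arithmetic
# described above. Class counts are placeholders; class median risks are
# approximated by interval midpoints (an assumption).

median_risk = {"<5%": 0.025, "5-10%": 0.075, "10-15%": 0.125,
               "15-20%": 0.175, "20-30%": 0.25, ">30%": 0.35}
pretest = {"<5%": 0, "5-10%": 220, "10-15%": 70, "15-20%": 23, "20-30%": 8, ">30%": 2}
posttest = {"<5%": 60, "5-10%": 190, "10-15%": 50, "15-20%": 15, "20-30%": 6, ">30%": 2}

def expected_cases(counts):
    return sum(n * median_risk[c] for c, n in counts.items())

a = expected_cases(pretest)    # expected CVD cases at pretest
b = expected_cases(posttest)   # expected CVD cases at posttest
prevented = a - b

external_cost = 34_000  # euro: diagnostic kits plus anti-smoking counseling (from the paper)
print(round(a, 1), round(b, 1), round(prevented, 1))
print("cost per potentially prevented case (euro):", round(external_cost / prevented))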
Table shows in each occupational category the number of people, percent of males, mean and standard deviation of age, separately in the whole population and in the intervention group. In the latter subset, subjects were almost exclusively males and had a relatively advanced age. Results obtained with the one-group pretest-posttest design are reported in Table , showing the proportions with the factor at pretest and posttest, point estimate and confidence interval for the ratio between proportions and exact McNemar significance probability (criterion of positivity in the footnote). It can be seen, in short, that one year after the educational intervention there was a significant increase of physical activity (by 46%; p = 0.0000) and a significant decrease of smoking (by 16%; p = 0.0000), alcohol drinking (by 14%; p = 0.0017), systolic blood pressure (by 17%; p = 0.0009), blood cholesterol (by 15%; p = 0.0004) and cardiovascular risk (by 24%; p = 0.0000). In 108 posttest smokers cigarettes consumption decreased after the intervention; for example, heavy smokers (>20 cigarettes/day) were 43 before and 32 after the intervention. These changes were statistically significant (symmetry exact significance probability = 0.0135). Table shows the results of the sensitivity analysis. Even assuming that those lost to follow-up kept their pretest value of cardiovascular risk, there was a highly significant difference among pretest and posttest risk of cardiovascular disease (symmetry exact significance probability = 0.0000). In Table , the proportions with the factor (CVD risk >5%) were 82.2% (=361/439) at posttest against 100% at pretest. The ratio was 0.82 that indicates a 18% decrease of the cardiovascular risk after the intervention (whereas it was 24% in subjects re-examined at 1 year, Table ). Table shows the number of subjects in the newly merged occupational categories (work sectors) with the percent of subjects with delta = 1 (pretest cardiovascular risk > posttest cardiovascular risk). The lowest value of delta, indicating the worst impact of intervention, was observed among "basic metals" workers. Values of delta were about twofold higher in other work sectors, suggesting better outcomes. Table also shows ORs with 95% confidence interval (95% CI) and p-value of two models of logistic regression. It can be seen that, after taking into account the changes produced by the intervention (posttest smoking, cholesterol and blood pressure in model 2), the original differences among sectors became no longer significant. Table shows the expected number of CVD cases on the basis of cardiovascular risk stratification at pretest and posttest. The total expected cases would be 29 or 23; the difference (6 = 29 – 23) would represent the quota prevented.
The relevant cost comprises resources coming from inside the manufacturing process − due to longer interruptions of work during the health surveillance along with the compensation given to OPs for the training received and the time spent in the educational intervention − and external resources acquired from outside the business. Only the latter can be easily quantifiable. They could be about 34,000 euro (14,000 euro for diagnostic kits and 20,000 euro for anti-smoking counseling) that is about 5,700 euro (=34,000/6) per each prevented case or about 10 euro (=14,000/3474) per each examined subject. An experienced occupational physician interviewed a small sample of workers already known as “opinion leaders” in their respective groups; the latter reported that both workers and employers perceived the intervention as useful. When the occupational physician was told to express his personal view, he answered: “we have gained esteem of workers”. Thus, quantitative statistically significant results and some qualitative evidence, together, suggested that the intervention had been effective. Governmental agencies and private sector groups are working hard to help employers to improve the health of their employees in an efficient, integrated, and cost-effective way. The objective is clear; it is the “how to” that is difficult . At present, the accepted gold standard for the evaluation of interventions in health care is the randomized controlled trial (RCT). The medical literature reports several RCTs on workplace health promotion programs. In a recent meta-analysis, a surprising observation is that studies with poor methodological quality reported an average effect size 2.9-fold larger than good-quality studies. Analyses stratifıed by outcome showed the same result for sickness absence, work productivity, and work ability. This might indicate publication bias: poor-quality studies are more frequently published if they show a greater effect . Another reason could be the fact that RCTs conducted in the worksite may be affected by a threat to internal validity that occurs when the intervention delivered to one group “diffuses” to another (contamination threat). This can easily happen when the intervention is educational in nature, since workers naturally share information with one another. A contamination is undesirable for an evaluation because it reduces the differences observed between the intervention and control groups . Therefore, the logistic requirements of RCTs often cause them to be unfeasible, especially for single smaller worksites. Given these limitations, the Cochrane Effective Practice and Organization of Care Group endorses three alternative methodologies for evaluating population interventions: (1) the non-RCT, (2) the controlled before-and-after study, and (3) the interrupted time series design. In a non-RCT, individuals or groups are allocated to experimental conditions using a nonrandom method. While nonrandom allocation may be more convenient in some circumstances, it increases the probability that unmeasured characteristics that may influence the outcomes introduces a systematic bias that could artificially exaggerate, or reduce, true intervention effects . It has been recently suggested that the optimal study design for a workplace health promotion program may be a quasi-experimental design in which medical cost data are collected for several years before the program and participants and nonparticipants are matched through propensity scoring . 
Another approach, whose advantages and methodological limitations have been recently discussed, is multiple baseline design. It involves conducting multiple time-series in multiple populations, each of which receives the intervention at a different point in time . Like RCTs, the multiple baseline design can demonstrate that a change in behavior has occurred, the change is a result of the intervention, and the change is significant. Especially important practical advantages over the RCT are that, first, this design requires fewer population groups and, second, communities may act as their own controls . As explained in , the present study was conducted in multiple occupational categories even though, because of time constraints, we could not stagger the intervention and all categories were examined concurrently. Individuals examined at posttest was a small fraction of the whole (6% = 323/5,536). This decreased the cost of prevention (about 5,700 euro for each prevented cases of cardiovascular disease) but involved a before-and-after design of the study. The latter is a non-experimental approach that must be used with caution, because of circumstances that threaten the ability to correctly infer whether the intervention had the desired effect. When the basis for choosing the intervention group is a greater apparent need for the intervention, an alternative explanation for the apparent success of the safety initiative is “regression-to the-mean” . In the present study, the intervention included only subjects with a high risk of cardiovascular disease. Thus, part of any decrease observed may have nothing to do with the intervention itself. Rather, CVD risk could be simply fluctuating closer to the average (year-to-year fluctuations). Strictly speaking, however, this consideration applies when one group is being examined and one outcome is being evaluated. As explained in , the present study is a multiple risk factor intervention conducted in multiple occupational categories. There was a consistent performance of all indicators of cardiovascular risk (Table ). Despite their original heterogeneity, work sectors were not found to influence the posttest risk of cardiovascular disease after taking into account the changes in modifiable risk factors produced by the intervention (Table ). On the other hand, the characteristics of the intervention group could be altered when enough people drop out of the study (dropout threat) . In the present study the before measurements were available; even in the extreme assumption that all dropouts kept their initial value of cardiovascular risk, a significant decrease of cardiovascular risk was observed at posttest (Table ). Overall, these pieces of evidence might increase confidence that the intervention was responsible for the change in the outcome. Cardiovascular risk can be viewed as a surrogate endpoint to investigate the primary event (cause-specific mortality). Adoption of surrogate criteria must, however, be regarded with some caution because the link between surrogate and primary event is not always linear; furthermore risk factor changes could not be maintained in the long term . The concept of the health promoting workplace is becoming increasingly relevant as more private and public organizations recognize that future success in a globalizing marketplace can only be achieved with a healthy, qualified and motivated workforce … For nations, the development of HPW will be a pre-requisite for sustainable social and economic development . 
In this context, health promotion activities fall within the mission of OPs, who should already be trained in them during their education. We tried to estimate the rough cost of the intervention. Regarding the financial impact of WHP programs, an extensive review of the literature and the major WHP study on cardiac risk factors showed a positive return on investment, demonstrating that such programs seem to pay for themselves. The results of this multi-faceted worksite intervention across multiple occupational groups from several geographic communities consistently converged on evidence of a decreased risk of cardiovascular disease after the educational intervention. The intervention was reasonably priced and easy to integrate within the current organization of occupational health surveillance in Italy. |
Advancing the Harmonization of Biopredictive Methodologies through the Product Quality Research Institute (PQRI) Consortium: Biopredictive Dissolution of Dipyridamole Tablets | c4342959-0d6b-412c-9082-98de35e73203 | 11468891 | Pharmacology[mh] Understanding and visualizing the dissolution profiles of orally administered dosage forms in clinical and preclinical species has attracted great interest for formulation design, development, and selection in the pharmaceutical industry and in academia. For these reasons, biorelevant dissolution has been developed and advanced by the scientific community. − These biorelevant dissolution methodologies, which may also be biopredictive, incorporate key aspects of human gastrointestinal physiology to evaluate the bioperformance of oral dosage forms, implementing quality by design (QbD) concepts to design and optimize the oral dosage forms. − Biorelevant dissolution experiments are still relatively new and have not been regulated, unlike compendial dissolution experiments created for the quality control of oral drug products, as found in, e.g., the United States Pharmacopoeia (USP). Individual researchers have developed biorelevant dissolution methodologies to better understand how formulations and compounds will perform in the body. As a result, those biorelevant dissolution profiles may look different among laboratories even if the same oral dosage forms are tested. The Product Quality Research Institute (PQRI), a nonprofit consortium of organizations that brings together members of the pharmaceutical industry, academia, and regulatory agencies to develop science-based approaches to regulation, has assembled an in vivo predictive dissolution and modeling working group (WG) to advance and harmonize in vivo predictive tools. The aims of this consortium are to address three questions: (1) can the PQRI working group (WG) members cross-validate their own experimental results by comparing dissolution profiles, identify the key experimental conditions, and move toward harmonizing their experimental methodologies; (2) will the dissolution profiles generated with those methodologies provide insightful information to guide in vivo studies; and (3) does the incorporation of the profiles into physiologically based biopharmaceutics modeling (PBBM) help to predict bioequivalence (BE) of the in vivo data, defined as predictions of the plasma profile by PBBM that lie within the 90% confidence interval for the geometric mean ratio between 80 and 125% of the clinical result for both area under the curve (AUC) and maximum concentration ( C max )? The PQRI WG has five phases to achieve its goals, and the results of the first two phases have already been published. Briefly, the PQRI WG studied the dissolution of ibuprofen (400 mg dose) and dipyridamole (50 mg dose) in the first two phases to understand if the WG members’ individual protocols for biorelevant dissolution methodologies would be able to produce consistent dissolution profiles of these two model drugs. Precipitation in the media representing the upper small intestine at a 50 mg dose of dipyridamole, a weak base drug, was not observed by any of the WG members, regardless of dissolution methodology. This finding was attributed to the high p K a (p K a = 6.4) of dipyridamole together with the high media volume for the dissolution study at this low dose (∼400–500 mL). In those studies, the dissolution profiles of both ibuprofen and dipyridamole satisfied the BE criteria.
On the basis of these studies, a more restrictive dissolution protocol was agreed upon by the WG. Since it is more challenging to obtain uniform biorelevant dissolution profiles if the test drug supersaturates and then precipitates in the GI tract, the dose was increased from 50 to 200 mg of dipyridamole. Although 200 mg of dipyridamole is still a clinically relevant dose, this dose level is only offered in an extended-release dosage form combined with aspirin. In the current studies, the WG members decided to use four immediate release (IR) tablets of 50 mg of dipyridamole to create a 200 mg dose. Biorelevant dissolution studies were conducted with 200 mg (50 mg × 4) of dipyridamole with or without the more restricted protocol to address the third and fourth objectives (Phase III and IV) of the overall five-phase project. Phase III consisted of using the higher dose of dipyridamole (200 mg) to generate biorelevant dissolution profiles with each WG member’s individual methodology and comparing results among WG members to determine which methodologies led to BE and which led to non-BE of the simulation relative to the clinical data. Phase IV consisted of using the higher dose of dipyridamole (200 mg) to generate biorelevant dissolution profiles with the more restricted protocols ( A,B), which were implemented on the basis of the Phase III results. These profiles were incorporated into the PBBM modeling software and assessed for BE or non-BE with the clinical data. Throughout this consortium project, all researchers performed biorelevant dissolution (with or without restrictions on methodology) on the same oral drug products and used the same batches of those oral products to eliminate any potential effects of the oral drug product source on the results. This overall exercise is intended to improve the quality and consistency of biorelevant dissolution and lead to harmonization of biorelevant dissolution methodologies. Eventually, the successful establishment of harmonized biorelevant dissolution is expected to improve oral product development, reduce animal studies, and, as a result, increase the success rate of clinical studies.

A single batch of 50 mg dipyridamole tablets (lot no. 200203A, Rising Pharmaceuticals, East Brunswick, NJ, USA) was purchased and distributed to all members of the PQRI WG. For the preparation of biorelevant media, FaSSGF/FeSSIF/FaSSIF powders were purchased from Biorelevant.com (Biorelevant.com, London, UK), and the media were prepared by each WG member before the biopredictive dissolution study. All other chemicals were analytical grade or HPLC grade. The WG members’ own methodologies for two-stage dissolution and/or transfer testing, which were based on their individual choices of experimental conditions and buffer media in Phase III, corresponded to the methods used by those members in Phase II. In Phase IV, all WG members switched over to the more restricted protocol for two-stage and transfer dissolution methodologies. All experimental conditions and methods are summarized in and , while historical changes in the experimental conditions are captured in . The dissolution profiles obtained by the WG were coupled with in silico modeling using GastroPlus version 9.8 (SimulationPlus, Inc., Lancaster, CA) to produce human plasma profiles.
The simulated profiles were compared with clinical data to determine BE or non-BE and thereby evaluate which of the dissolution methodologies were able to predict in vivo dissolution profiles . The dissolution methodologies conducted by each of the WG members in the Phase III studies are summarized in . The more restricted protocol for dissolution testing of dipyridamole tablets in Phase IV is summarized in . A more detailed description of the experimental conditions presented in has been reported previously. The oral drug absorption of dipyridamole was computed on the basis of the physicochemical, pharmacokinetic, and drug dissolution properties of dipyridamole, which is weakly basic and classified as a Biopharmaceutics Classification System (BCS) class IIb drug, according to the simulation conditions proposed in the literature. − Single simulations were performed with the biorelevant dissolution profile from each WG member to predict the pharmacokinetic (PK) profile of 200 mg (50 mg × 4) dipyridamole IR tablets under fasted-state conditions. General input parameters for in silico simulation of dipyridamole were obtained from the literature and are summarized in . , , , The biopredictive dissolution profiles were incorporated into the in silico model as “controlled release” profiles. This selection prevents the in silico software from dictating the drug release profile of dipyridamole according to the physicochemical properties of the drug and the physiological settings in the software, which may not be able to display the supersaturation. Drug absorption from the stomach was assumed to be negligible in this set of predictions. The duration of the simulations was 24 h. The clinical data are regenerated and displayed in and to portray the variability. In this modeling exercise, the predicted plasma profile based on the dissolution profile was compared to the BE range based on the average plasma data. This allowed us to evaluate how close the biorelevant dissolution profile is to the in vivo dissolution profile. However, note that the purpose of this simulation was not to provide a full prediction of the observed clinical plasma profile but rather to test the criticality of the biorelevant dissolution profile as a variable input parameter. All predictions were performed using the GastroPlus standard physiological conditions: Human Physiological-Fasted and Opt LogD Model SA/V 6.1. The predicted plasma profiles were compared with clinical trial results for dipyridamole pharmacokinetics to evaluate whether incorporating the results from the various dissolution studies into GastroPlus was able to predict the in vivo performance. , If the predicted plasma profile by PBBM based on the biorelevant dissolution profile fell within 80–125% of the clinical result for both AUC and maximum concentration ( C max ), then the profile was considered to be BE. Thus, that biorelevant dissolution profile successfully predicted the drug dissolution in vivo .
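To make the BE classification used above concrete, the short sketch below expresses the 80–125% criterion as code. It is only an illustration of the rule described in this section, not the WG's actual analysis workflow, and the example AUC and C max values are hypothetical.

```python
def within_be_limits(predicted, observed, lower=0.80, upper=1.25):
    """True if the predicted/observed ratio falls inside the BE window."""
    ratio = predicted / observed
    return lower <= ratio <= upper


def classify_simulation(pred_auc, obs_auc, pred_cmax, obs_cmax):
    """A simulated profile counts as BE only if both AUC and Cmax pass."""
    auc_ok = within_be_limits(pred_auc, obs_auc)
    cmax_ok = within_be_limits(pred_cmax, obs_cmax)
    return {"AUC": auc_ok, "Cmax": cmax_ok, "BE": auc_ok and cmax_ok}


# Hypothetical example: AUC slightly overpredicted, Cmax strongly overpredicted.
print(classify_simulation(pred_auc=1150.0, obs_auc=1000.0,
                          pred_cmax=620.0, obs_cmax=450.0))
# {'AUC': True, 'Cmax': False, 'BE': False} -> classified as non-BE
```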
Phase III: Dipyridamole Dissolution Using WG Members’ Own Dissolution Methodologies

The dissolution profiles of dipyridamole using (A) two-stage methodology and (B) transfer methodology are summarized in . In two-stage dissolution methodologies, four institutes reached almost complete dissolution of dipyridamole at the end of the first stage, while Institute C observed incomplete dissolution in the first stage regardless of buffer species and capacity ( A). This incomplete dissolution (∼10% dissolution) was caused by insufficient mixing at the low buffer volume (20 mL) made available in the vessel. In transfer methodologies, since drug absorption in the stomach is generally insignificant, only the drug dissolution profiles under the small intestinal stage are plotted ( B). The results from Institutes D, F, and G exhibited less than 40% dissolution of dipyridamole in the intestinal stage, while the result of Institute E exhibited ∼70% dipyridamole dissolution at the last time point. The higher % dissolved reported by Institute E may be traced back to two sources: (1) there was no transfer of gastric media in the first 30 min of the experiment, so the majority of the dipyridamole would be dissolving by the time the transfer was started, and (2) the transfer rate was slower than that of the others ( t 1/2 = 74 min) ( B). All results eventually exhibited ∼20% drug dissolution in the small intestinal stage, except for the result of Institute C, where the drug did not dissolve. Since the p K a of dipyridamole is close to the pH of the dissolution buffer (pH 5.8–6.8), the individual choice of the buffer capacity, species, and/or volume could reasonably be expected to affect its dissolution.

Phase IV-1: Dipyridamole Dissolution with the More Restricted Protocol—Part 1

The dissolution profiles of dipyridamole obtained with the more restricted protocol described in A are summarized in , with two-stage methodology results in A and transfer methodology results in B. Using two-stage dissolution methodologies, all institutes displayed almost complete dissolution of dipyridamole at the end of 30 min in the gastric stage and excellent uniformity of the dissolution profiles. In transfer dissolution testing, only drug dissolution profiles under the small intestinal region are plotted ( B). The results of Institutes E and H initially exhibited similar dissolution rates of dipyridamole, while the results of Institutes D, F, and G exhibited slower dipyridamole dissolution in the intestinal stage. All transfer dissolution profiles reached ∼20% at the end of the experiment except that of Institute H. Unlike the other WG members, Institute H adopted a two-vessel transfer methodology with a constant volume in the second vessel. When the gastric content was transferred to the intestinal stage, the same volume was discarded from the small intestinal stage to maintain the media volume in the second vessel. Since the concentration of dipyridamole was measured in the constant volume of the intestinal stage, the full dissolution profile in that experiment was calculated and regenerated on the basis of the concentration in the second vessel, as well as in the discarded media volume at the given time. Therefore, the result did not concur with the decline in concentration reported by Institutes D, E, F, and G.
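Because the transfer rate strongly shapes what the intestinal-stage profile looks like (compare the slow t 1/2 = 74 min transfer noted for Institute E in Phase III with the faster transfers used by other members), a minimal sketch of first-order gastric-to-intestinal transfer is given below. It is a generic illustration of the kinetics assumed in transfer methodologies, not any WG member's validated protocol, and the half-lives and volumes are example values only.

```python
import math


def fraction_transferred(t_min, half_life_min):
    """Cumulative fraction of the gastric volume emptied by time t
    under first-order transfer kinetics."""
    k = math.log(2) / half_life_min
    return 1.0 - math.exp(-k * t_min)


# Example: compare a fast and a slow transfer half-life (values illustrative).
gastric_volume_ml = 250.0
for half_life in (15.0, 74.0):
    emptied = [round(gastric_volume_ml * fraction_transferred(t, half_life), 1)
               for t in (10, 30, 60, 120)]
    print(f"t1/2 = {half_life:>4} min -> mL transferred at 10/30/60/120 min: {emptied}")
# A short half-life delivers most of the dissolved drug to the intestinal
# vessel early (higher early concentrations, more chance of supersaturation),
# whereas a long half-life spreads the input out over the whole experiment.
```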
Phase IV-2: Dipyridamole Dissolution with the More Restricted Protocol—Part 2

The dissolution profiles of dipyridamole obtained with the more restricted protocol described in B are summarized in , with results for the two-stage methodology in A and transfer methodology in B. In two-stage dissolution testing, all four institutes displayed complete dissolution of dipyridamole in the gastric stage and excellent uniformity in the entire dissolution profiles ( A). In transfer dissolution testing, only drug dissolution profiles in the small intestinal region are plotted ( B). The dissolution results of Institutes E and H exhibited a faster initial dissolution rate in the intestinal stage than those of Institutes D, F, and G. As in Phase IV-1, Institute H implemented a constant media volume in the second vessel (intestinal stage) by discarding extra volume. In this experiment, the full dissolution profile was calculated and regenerated on the basis of the concentration in the second vessel, as well as in the discarded media volume at the given time. As a result, the dissolution profile of Institute H might be an overestimate .

Dipyridamole Modeling

The plasma profiles of dipyridamole were simulated on the basis of the biorelevant dissolution profiles produced by the WG. The purpose of this simulation was not to provide a fully accurate prediction of the observed clinical plasma profile but rather to directionally assess the criticality of differences in in vitro conditions, dissolution apparatus, and dissolution methodologies. The incorporation of biorelevant dissolution profiles must be optimized to correctly predict clinical plasma profiles. Nevertheless, the simulations provided the direction and rank order of the criticality of those differences in the experimental conditions. The predictions of dipyridamole absorption at a dose of 200 mg from the two-stage biorelevant dissolution profiles all exhibited non-BE ( A, A, and A). As dissolution of 200 mg of dipyridamole was complete in the first 30 min in two-stage dissolution, the in silico modeling displayed much higher C max values than the clinical data ( A, A, and A). The simulation results generated with two-stage dissolution testing thus overestimated the predicted plasma profile. In transfer dissolution testing, only Institute D obtained a biorelevant dissolution profile that led to BE in terms of both C max and AUC using its own method ( , A, and ), while three institutes produced biorelevant dissolution profiles that satisfied BE in either C max or AUC, and Institute E narrowly missed BE when method IV-2 was applied ( B, , and ). As dipyridamole has a high p K a (6.4) and all biorelevant dissolution methods require an aqueous volume of 400–750 mL at the end of the experiment, this leads to less precipitation and, as a result, overestimation of oral absorption. The slight reduction in gastric acidity and buffer volumes from the Phase III to the Phase IV-2 studies likely explains the improvement in the modeling results. Since the gastric conditions are important for dipyridamole dissolution, an acidic pH, as well as adequate dissolution time and hydrodynamics, are necessary for the tablets to disintegrate and dissolve. Thus, dissolution methodologies should incorporate physiologically relevant gastric emptying times and complete transfer to the small intestinal environment to achieve more meaningful predictions. The experimental conditions, such as buffer pH, buffer species/capacity, volume, and stirring speed, as well as paddle and vessel sizes, were all shown to affect the dipyridamole dissolution profiles.
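The argument above about p K a and media volume can be made quantitative with the standard pH–solubility relationship for a monoprotic weak base. The sketch below is only an illustration of that relationship: the intrinsic (un-ionized) solubility used here is an assumed ballpark value for dipyridamole, not a measured input from this study.

```python
def weak_base_solubility(intrinsic_mg_per_ml, pka, ph):
    """Total solubility of a monoprotic weak base: S = S0 * (1 + 10**(pKa - pH))."""
    return intrinsic_mg_per_ml * (1.0 + 10.0 ** (pka - ph))


DOSE_MG = 200.0     # 4 x 50 mg tablets
VOLUME_ML = 500.0   # within the 400-750 mL range cited above
S0 = 0.006          # assumed intrinsic solubility, mg/mL (illustrative only)
PKA = 6.4

for ph in (1.6, 5.0, 6.5):
    s = weak_base_solubility(S0, PKA, ph)
    capacity_mg = s * VOLUME_ML
    print(f"pH {ph}: solubility ~ {s:.3f} mg/mL, "
          f"dose / solubility capacity ~ {DOSE_MG / capacity_mg:.2f}")
# At gastric pH the 200 mg dose dissolves easily, while at intestinal pH the
# same dose can exceed what the medium holds at equilibrium, which is why
# supersaturation and possible precipitation become the key readout at 200 mg.
```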
Researchers in academia and pharmaceutical companies have tried to understand how oral formulations would perform in the human GI tract so that oral absorption can be predicted and the oral formulation can be optimized. A better understanding of the bioperformance of oral dosage forms would bring huge benefits prior to dosing in both preclinical and clinical pharmacokinetic studies. However, there is currently no guideline for biorelevant dissolution methodology and its regulation, in contrast to the compendial dissolution methodologies for, e.g., quality control. So far, each pharmaceutical company and academic institute has developed its own methodology to test oral compounds and products of interest and to predict their bioperformance.
Multiple different methodologies have been proposed to predict the bioperformance of oral drug products. , , − These methodologies can be divided into two major biorelevant dissolution types, two-stage dissolution and transfer methodologies. The two-stage dissolution has been a popular biorelevant dissolution methodology because of its relatively easy setup, without any requirement for specific equipment, and its ability to estimate the bioperformance of oral formulations. , , In this methodology, the test oral dosage form is exposed to two different pH environments in just one dissolution vessel, with a concentrated solution of the intestinal phase added to the first (gastric) phase at a given time point to switch the conditions over to a composition representing the small intestine. The other biorelevant dissolution methodology is often referred to as a transfer methodology, which usually involves two vessels containing two different pH environments. Although it is based on the same principle as the two-stage dissolution methodology, mechanical transfer of the gastric-phase medium into a vessel containing the small intestinal medium is performed to simulate gastric emptying. This is intended to improve the evaluation of the bioperformance. , , , , Different dissolution media have been adopted in these dissolution methodologies to mimic gastric conditions, e.g., 0.01–0.1 N hydrochloric acid (HCl), simulated gastric fluid (SGF), and fasted-state simulated gastric fluid (FaSSGF) for the gastric stage. Less acidic buffers, such as maleate buffer, representing achlorhydric conditions (pH 4.0 to 6.0), have also been proposed. Likewise, various concentrations and buffer species, like simulated intestinal fluid (SIF) and variations on fasted-state simulated intestinal fluid (FaSSIF), within the pH range of 6.5 to 7.5, have been used in these biorelevant dissolution methodologies to mimic the small intestinal conditions. , − These biorelevant methodologies have been mainly used to investigate the bioperformance of model drugs that have pH-dependent solubility. As seen in , the empirical conditions of the biorelevant methodologies used by the PQRI WG members varied substantially, even though those methodologies have core similarities that reflect an understanding of human GI physiology, and they exhibited precipitation outcomes similar to the previous report. , As a result, the biorelevant dissolution profiles generated by the WG members in Phase III were quite disparate, and only one methodology met the bioequivalence criteria . Because the same lot/batch of the oral drug product had been used in all experiments, the WG was able to determine the important experimental factors in biorelevant dissolution methodologies and formulate an approach aimed at better regulating the range of experimental conditions. The PQRI WG worked through two specific protocols ( and ) to demonstrate how harmonization of biorelevant dissolution profiles, which exhibited drug concentration ranges similar to those of the previous reports in vitro and in vivo, could lead to more institutes being able to satisfy the BE criteria when the dissolution profiles were incorporated into PBBM ( and ). , Generally, the harmonized two-stage dissolution methodology produced very uniform dissolution profiles but overestimated the drug dissolution in the first (gastric) stage, even in the more restricted protocols. Hence, the C max values generated in PBBM were also overestimated ( and – ).
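When dissolution profiles from different laboratories are compared, one standard quantitative similarity metric is the regulatory f2 factor (f2 ≥ 50 is conventionally read as "similar"). The sketch below shows that calculation only as an aside relevant to cross-laboratory comparisons such as those discussed here; the profiles are made-up numbers, and the article does not state that the WG used f2.

```python
import math


def f2_similarity(reference, test):
    """Standard f2 similarity factor between two % dissolved profiles
    sampled at the same time points (f2 >= 50 is conventionally 'similar')."""
    if len(reference) != len(test):
        raise ValueError("Profiles must share the same sampling time points")
    n = len(reference)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))


# Made-up % dissolved values at common time points, for illustration only.
lab_a = [12, 35, 58, 74, 85, 90]
lab_b = [10, 30, 52, 70, 83, 89]   # close to lab_a
lab_c = [30, 65, 88, 95, 97, 98]   # much faster release
print(round(f2_similarity(lab_a, lab_b), 1))  # high value -> similar profiles
print(round(f2_similarity(lab_a, lab_c), 1))  # low value  -> dissimilar profiles
```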
The initial dissolution of dipyridamole in the vessel representing small intestinal conditions in the transfer method was well controlled by the PQRI WG when using the more restricted protocols ( and – ). However, complete harmonization of the dissolution profiles, i.e., the same dissolution and precipitation rates across the laboratories, was not achieved. This was attributed to the different hydrodynamics created by using different vessel and paddle sizes ( and ). The differences in hydrodynamics should be studied in more detail to better understand their influence on supersaturation/precipitation profiles. In summary, the transfer methodology appears to be more promising for predicting plasma profiles. However, the WG needs to do more work to define the optimum methods and conditions more closely with respect to the equipment specifications. Additionally, the way that the dissolution profiles are entered into the simulation software needs further optimization. In this step-by-step project, the PQRI WG is successfully working toward harmonizing biopredictive dissolution tools and conditions and defining the optimal conditions for meaningful in vivo prediction. |
Curcumin Derivatives in Medicinal Chemistry: Potential Applications in Cancer Treatment | 85bec3cc-2fcb-4923-8b42-c0daf846ad54 | 11596437 | Pharmacology[mh] | Curcumin or (1 E ,6 E )-1,7-bis(4-hydroxy-3-methoxyphenyl)hepta-1,6-diene-3,5-dione, is a compound present in the rhizome of the Curcuma genus plants, most notably in turmeric ( Curcuma longa L.). Historically, curcumin’s vivid color led to its use as a dye, while its flavor made it popular as a kitchen spice. It is also worth remembering that turmeric, containing curcumin, has been used in traditional medicine for many centuries . Curcumin is classified as a diarylheptanoid. Diarylheptanoids are a group of compounds characterized by two aromatic rings connected by a seven-carbon chain. Depending on the type of seven-carbon unit connecting these rings, diarylheptanoids are divided into four subgroups. These include: (i) linear diarylheptanoids, of which curcumin is a representative; (ii) tetrahydropyran diarylheptanoids, which are characterized by a tetrahydropyran ring in the seven-carbon chain, such as centrolobin; (iii) diarylethyl heptanoids, which contain an aryl-arylethyl bond, such as Acerogenin A; and (iv) biphenyl-diarylethyl heptanoids, in which a biphenyl bridge is present (e.g., Acerogenin E). Diarylheptanoids have attracted intense interest from medicinal and synthetic chemists because of their wide-ranging biological activities . There is an ongoing interest in the use of curcumin in medicine, especially against cancer and pathogens. Of great importance are its anti-inflammatory and antioxidant properties and potential applications in the treatment of diabetes cardiovascular, and inflammatory bowel diseases . Biological activities of curcumin are often the results of curcumin promoting or inhibiting certain enzymes and cellular pathways . This is one of the reasons curcumin exhibits unique anticancer activity, inducing apoptosis and inhibiting tumor growth . Studies have shown curcumin’s effectiveness against many types of cancer, including breast, lung, kidney, uterus, cervical and prostate cancers, squamous cell carcinoma of the head and neck, and brain tumors. Curcumin has also shown potential in suppressing chemoresistance in various cancers. The compound and its derivatives mimic estrogen and compete via aryl hydrocarbon receptors for entry to cells. The benefits of co-delivering curcuminoid derivatives were studied in many breast cancer cell lines (e.g., MDA-MB-468, MDA-MB-231, BT-549, BT-20, and MCF-7). The current understanding of curcumin’s role and that of its derivatives in chemosensitization is based on its multi-sectoral action—reactive oxygen species (ROS) generation, activity modulation of protein kinases, pro-apoptotic regulators, histone deacetylase, telomerase, efflux pumps, and many more . Despite these benefits, the clinical use of curcumin is hindered by its low chemical stability and limited solubility in water, resulting in poor bioavailability after oral ingestion . In addition, rapid elimination from the human body limits the therapeutic use of this compound . The hydrophobic nature of curcumin limits its cellular uptake, as it tends to bind to the fatty acyl chains of membrane lipids rather than efficiently entering the cytoplasm. To overcome these challenges and enhance curcumin’s anticancer activity, its structural modifications are being researched to improve selective toxicity against cancer cells, increase bioavailability, and enhance stability . 
The physicochemical shortcomings of curcumin are associated with the presence of the enol fragment and an active methylene group in its structure. These features are responsible for the low solubility and limited stability of curcumin in biological media . Modifications of curcuminoids quite often aim to increase the solubility and improve the compatibility within the enzyme active centers. Two main approaches are used to obtain new curcuminoids: (i) modification of naturally occurring curcuminoids and (ii) synthesis starting from simple precursors. Modifications to the curcumin molecule primarily target the hydroxyl and methoxyl groups, as well as the β-diketone moiety, which undergoes tautomerism to its enol form in an alkaline environment. The hydroxyl groups may be converted to esters, whereas the β-diketone moiety may undergo condensations or act as a ligand in complexes. The active methylene group between the two carbonyls is acidic in nature and thus susceptible to electrophilic attack. Synthesis from simple precursors typically involves aryl compounds and functionalized hydrocarbon chains or rings . This review highlights promising modifications of curcumin derivatives with potential anticancer activity. It attempts to answer two fundamental questions: (i) whether and to what extent curcumin and its derivatives can be used in the therapy of selected cancers, and (ii) in what direction we should proceed in the design of new active curcumin derivatives. For this purpose, the first part covers a review of the literature on the efficacy of curcumin and its derivatives in selected types of cancer, proving the validity of further studies aimed at chemical modification of its molecule. In turn, the second part of the review covers the structure/pharmacological activity relationships of curcumin derivatives. To sum up, the aim of this review is to indicate which cancers have potential for the use of curcumin and its derivatives and in what direction the research on the anticancer activity of newly synthesized derivatives could be conducted. In this chapter, selected curcumin derivatives with potential activity against cancer are discussed . In the following subsections, referring to several types of cancers, studies on curcuminoid derivatives are presented .

2.1. Breast Cancers

In 2022, there were an estimated 20 million new cancer cases worldwide and 9.7 million cancer deaths. For women, breast cancer alone is predicted to contribute to 31% of female cancer cases in 2023 . Breast cancer is clinically categorized into five subtypes depending on the expression of estrogen receptors (ER), progesterone receptors (PR), and the human epidermal growth factor receptor 2 (HER2) oncogene. Tumors that express ER and/or PR are classified as receptor-positive breast cancers, while those lacking ER, PR, and HER2 expression are referred to as triple-negative breast cancers (TNBC). Presently, the primary treatment options for breast cancer include chemotherapy, endocrine therapy, oligo-small molecule inhibitor therapy, and surgical removal of the tumor . Curcumin and its derivatives have demonstrated significant efficacy against various cancers. Evidence from in vivo and in vitro studies indicates that curcumin exhibits anticancer properties in breast cancer through numerous mechanisms, including induction of cell cycle arrest and apoptosis, modulation of relevant signaling pathways and gene expression, inhibition of tumor cell proliferation, suppression of metastasis, and prevention of angiogenesis.
Detailed documentation has shown that the main targets and signaling pathways interacting with curcumin include: nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), p53 protein (p53), vascular endothelial growth factor (VEGF), ROS, PI3K/AKT/mTOR pathway, protein kinase B, Wnt/β-catenin, JAK/STAT signaling pathway, ER, HER2, and microRNA. As mentioned before, the clinical use of curcumin and its efficacy is limited due to its unfavorable physicochemical properties despite such promising effects. This chapter reviews recent advances in research on the synthesis of curcumin derivatives, focusing on their action in breast cancer therapy. Afzal et al. condensed phenyl urea group with two carbonyl groups of curcumin. The authors obtained three pyrimidinone analogs, among which 1 , visualized in , revealed the highest inhibitory activity towards MCF7 (breast cancer cell line). This effect could be assigned to an affinity of 1 to the active site of the epidermal growth factor receptor (EGFR). The phenomenon was further examined by molecular docking studies and led to the observation that compound 1 had the strongest binding affinity to EGFR among the studied compounds, with three different types of interactions. Data on growth inhibitory potential (at the concentration of 10 µM) was collected from nine different types of cancers and 59 cell lines, six of which were breast cancer cell lines. The mean growth inhibitory potential from these six lines was calculated as 75%, which is an improvement compared to curcumin growth inhibitory potential—56% . A pyrazole derivative of curcumin 2 revealed lower mean growth inhibitory percentage points against the same six types of breast cancer cell lines compared to compound 1 . Structurally similar compounds were studied by Rodrigues et al. , who assessed four five-membered heterocyclic derivatives of curcumin 3 – 6 (see ) using in silico and in vitro studies on the MCF7 cell line. The retention of characteristic curcumin scaffold, namely the carbonyl chain and the aryl side chain, and a modification of β-diketone moiety played a fundamental role in improving the biological properties. Curcuminoids 3 and 6 were less potent than curcumin, based on IC 50 values. Although the substituted pyrazole derivative 5 presented a satisfactory IC 50 value, the compound was less soluble and tended to precipitate. The most potent derivative was 4 , with an IC 50 value lower than that noted for curcumin but higher than that of 5 . In contrast to 5 , derivative 3 did not reveal any physicochemical shortcomings. Additionally, in silico calculations showed that the absorption from the gastrointestinal tract would be the highest for 3 , and the compound would have a good binding affinity to key proteins that play a role in cancer progression. All things considered, the isoxazole analog was identified as a promising lead structure for further evaluation . Panda et al. esterified curcumin using amino acids and screened them for anticancer, antimicrobial, anti-inflammatory, and analgesic properties (compounds 7 – 10 in ). The novel conjugates revealed a promising effect on the MCF7 cells (IC 50 values between 9.15 and 11.52 µM), more profound than towards lung and prostate cancer cell lines. Interestingly, analogs with protected amines exhibited IC 50 values exceeding 100 µM . Panda et al. 
continued their work on esterified curcumin derivatives and, in another article, reported on dichloroacetic derivatives of curcumin ( 11 and 12 in ) conjugated directly via the ester bond or an amino acid linker—glycine, L/β-alanine, L-phenylalanine, or γ-aminobutyric acid. Dichloroacetic acid is a potent anticancer agent that suffers from worrisome toxicity. A total of six novel compounds were used in a clonogenic survival assay, which showed the suppression of the proliferation of T-47D and MDA-MB-231 breast cancer cell lines but not the healthy MCF10A epithelial cells from the human mammary gland. The activity of the compounds was about 8–16 times greater towards cancer cell lines, with EC 50 values up to a nanomolar level at 424 and 778 nM for 11 and 12 , respectively. Further study of 11 in the mouse mammary tumor model showed significantly reduced tumor volume gain compared to the control group and dichloroacetic acid alone. Moreover, no increased systemic toxicity was observed as the body weight, organ histology, and blood parameters were optimal. Finally, the in silico studies predicted that compound 11 would have a better inhibitory affinity towards DYRK2 (a protein that promotes proliferation) and lower towards hERG (inhibition causes cardiac-related disorders), would be weaker metabolized by CYP2D6 and would not be a P-glycoprotein−P-gp (an efflux pump) substrate . Different kinds of derivatives were obtained by Hsieh et al. , of which compounds 13 and 14 were mainly assessed. Both compounds revealed better activity than curcumin. It’s worth noting that hydroxyl groups of curcumin were substituted with dihydroxyacids in a series of reactions to yield 13 . As compared to curcumin, the novel analog showed good stability, higher hydrophilicity, and solubility in water and alcohol, indicating better potential. It was further examined both in vivo and in vitro on mice models and the MDA-MB-231 breast cancer cell line, respectively. The calculated IC 50 value was 6.1 times lower against MDA-MB-231 than curcumin, and the value was also lower than that obtained for monosubstituted derivative. However, it was slightly higher than that observed for the analog with a longer alkyl chain—bis(hydroxymethyl)butanoic acid. Moreover, swapping the position between ether and ester in the aromatic rings increased the IC 50 value. Interestingly, elongation of the alkyl group in the swapped ether-ester compound resulted in a decrease of the inhibitory potential, which was a different result than in the non-swapped ether-ester compound. However, all mentioned derivatives in this paragraph were still more potent than the parent curcumin. The effect of tautomerization between keto and enol was also studied. The authors substituted the methylene group in between the carbonyls with two methyl groups—preventing the formation of enol form and concluded that the novel derivative had a lower IC 50 value. Further evaluation of compound 13 proved a synergistic effect with doxorubicin against the doxorubicin-resistant MDA-MB-231 cancer cell line. Furthermore, administration of 13 to the MDA-MB-231 xenograft nude mice model reduced the tumor size by 60%, whereas in combination with doxorubicin by 80% of that of the control group. Additionally, there was no difference in body weight, behavior, and blood chemistry between treated and untreated mice . The effect could be attributed to G2/M phase arrest, apoptosis, and autophagy of the treated cancer cells, as evaluated in a following study . 
Furthermore, curcumin derivative 13 decreased the invasiveness of MDA-MB-231 cells at concentrations below 5 µM by inhibiting the secretion of proteins that cause the degradation of gelatin and collagens, as well as inhibiting the MAPK/ERK/AKT signaling pathway . Curcumin dimers are known to be more stable and to have better inhibitory potential towards cancer cells. Moreover, curcumin-piperidone derivatives also show better antiproliferative potential against cancer cells. Therefore, a combination of the two modifications was a driving force for studies by Koroth et al. and Nirgude et al. . Two interesting dimers ( 15 and 16 ) are shown in A. Those compounds bear chloro- or nitro- substituents in the aromatic rings, and the hydrocarbon chain between the aromatic rings is shorter. The authors stated that adding an electron-withdrawing substituent (-NO 2 , -Cl) to the parent structure enhanced the antiproliferative potential. Compound 15 was tested in vitro against the MCF7 and metastatic MDA-MB-231 breast cancer cell lines. The effective dose was at the nanomolar level, namely 54 and 127 nM for MDA-MB-231 and MCF7, respectively. The derivative 15 was 100 times more potent than curcumin and revealed better potential against less differentiated and more metastatic cancer cells. At the same time, it did not show any cytotoxicity against peripheral blood mononuclear cells at concentrations of up to 150 nM. The mechanism of action was via the activation of the intrinsic pathway of apoptosis. In addition, the migration capacity of MDA-MB-231 was diminished, and the effect was attributed to the downregulation of the expression of matrix metalloproteinase 1 . Research on compound 15 was continued in subsequent in vivo studies on EAC mouse tumor allografts . Compound 15 was effective and demonstrated a synergistic effect with doxorubicin, cisplatin, and olaparib. Simultaneously, derivative 15 was found to be safe, as it did not cause any histopathological or body mass changes as compared to the control group. The authors found evidence for the pleiotropic action of compound 15 : 74 and 114 changes in miRNA and mRNA expression, respectively, were found. The authors also described a unique miRNA-mRNA interaction network, which indicated an impact on the regulation targets of NF-κB . Analog 16 was also evaluated and revealed biological properties similar to those of 15 , with an IC 50 of 31 nM against MDA-MB-231 . Other modifications described in the literature include replacing the β-diketone group with cyclohexanone (which increases the stability and bioavailability) and substituting some hydrogens in the aromatic rings with N-alkyl-methanimines or N-alkyl-methanamines . Both the synthesized imino and amino curcumin derivatives (compounds 17 – 32 , B) showed better anticancer potential than curcumin and methotrexate, as the novel analogs had IC 50 values towards MCF7 in the range of 10–300 μg/mL. The substitution of piperidine ( 19 ) did not change the IC 50 value as compared to curcumin, but the replacement of the piperidine ring with cyclohexane drastically decreased the IC 50 value ( 18 ). The change to the pyridine ring (compounds 20 – 21 ) also affected the IC 50 , with values slightly above those for compound 18 but still almost 6 times lower than that for curcumin. In addition, the position of the nitrogen atom in the pyridine ring impacted the IC 50 , as compound 20 had lower IC 50 values than 21 .
Compounds with longer linkers between the heterocycle and the imine, 22 – 23 , had lower IC 50 values than those with very rigid structures, 20 – 21 . Approximately a two-fold decrease of the IC 50 was observed for the S stereoisomer of 1-phenylethan-imine (compound 25 ) as compared to the R isomer (compound 26 ). Interestingly, the reduction to 1-phenylethan-amine did not change the IC 50 value much, but an opposite relationship in the reduced imines could be observed: the R isomer ( 30 ) had a lower IC 50 as compared to the S isomer ( 31 ). In contrast, for the piperidine and pyridine analogs, the reduced compounds (imine to amine) 27 – 29 showed lower IC 50 values. Overall, three compounds, 18 , 27 , and 28 , were acknowledged as the most potent in the study, with IC 50 values lower than that of methotrexate . Kostrzewa et al. developed structurally similar 4-piperidone ring-fused curcumins that exhibited antioxidant or ROS-generating properties, which induced PTP1B enzyme degradation (compounds 33 – 35 , C) . The introduction of a nitrogen atom and the protection of the hydroxyl groups by acetyl groups in curcumin were aimed at increasing the cytotoxic effect and reducing metabolism, respectively. In the nitro blue tetrazolium test, 35 showed the best antioxidant properties, whereas in the in vitro cytotoxicity test on the MCF-7 and MDA-MB-231 cell lines, 33 and 34 revealed the best IC 50 values and were even better than curcumin. Interestingly, the structural isomers 34 and 35 showed different properties; namely, the isomer with the 4-piperidone moiety closer to the non-substituted aromatic ring ( 35 ) revealed better antioxidant properties. Compounds 33 and 34 were further evaluated, and both showed similar cytotoxicity towards the MCF-7 cell line, but against the MDA-MB-231 cell line, 34 showed about two times greater inhibitory potential compared to 33 . Thus, the protection of the hydroxyl group played a role in augmenting the anticancer effect on more malignant cell lines. Both 33 and 34 were found to generate ROS in cancer cell lines but not in the HaCaT cells. Compound 33 was evaluated as a photosensitizer in MDA-MB-231, and the results indicated that the compound showed higher cytotoxicity after irradiation with green light compared to curcumin. The in silico studies revealed that the inhibition of PTP1B could be caused by allosteric regulation by 34 .

An analysis of the reviewed studies allows us to identify how particular changes in the structures of the discussed compounds impact their biological activity, bioavailability, and stability.

– Hydroxyl and methoxyl group modifications: Esterification at the 3′-hydroxyl group is favored over esterification at the 4′-hydroxyl group in terms of biological activity . Substituents with longer alkyl chains are more effective at the 4′-hydroxyl position compared to the 3′-hydroxyl position . Asymmetric ester derivatives of curcumin should be prioritized for consideration, as some demonstrate higher potency compared to symmetric modifications of the hydroxyl groups . Amino acid derivatives protected with Boc, Fmoc, and Cbz groups are generally ineffective unless these protective groups are replaced with dichloroacetic acid . The introduction of amines or imines has been shown to enhance activity, particularly those featuring pyridine or piperidine rings .

– ß-Diketo moiety adjustments: Conversion to a pyrimidinone ring enhances EGFR targeting properties . The isoxazole ring exhibits greater potency compared to pyrazoles . Dimers of piperidinone-modified curcumin demonstrate increased efficacy .
Conversion to cyclohexanone improves bioavailability and stability .

– Ring modifications: Electron-withdrawing groups (EWGs), such as nitro (NO 2 ) and chlorine (Cl), correlate with enhanced anticancer activity .

2.2. Glioma

Glioblastoma multiforme has a low survival rate due to frequent recurrence and resistance to current treatments, which is largely due to the molecular heterogeneity of gliomas and the tumor microenvironment. Communication between glioma cells, healthy cells, and the immune system promotes cancer progression and resistance to treatment, particularly through the development of glioma stem cells. In addition, factors released by the tumor and environmental influences, such as hypoxia, help cancer cells evade detection by the immune system and promote disease progression. Curcumin inhibits the growth of malignant gliomas by affecting various cellular processes, including proliferation, apoptosis (through downregulation of bcl-2 and bcl-xL and activation of caspases), autophagy, angiogenesis, immunomodulation, as well as invasion and metastasis. In particular, curcumin has been found to selectively target and kill cancer cells while sparing non-cancerous nerve cells such as astrocytes and neurons. In addition, it can trigger autophagy, which is regulated by simultaneous inhibition of the Akt/mTOR/p70S6K pathway and activation of the ERK1/2 pathway. Overall, these findings underscore the anticancer potential of curcumin, as well as of its analogs, which may exhibit better activity and bioavailability than curcumin alone . The previously described compound 1 also had potent activities against CNS cancer cell lines, stronger than curcumin. Specifically, the growth inhibition was above 80% for SF-268, SF-539, and U251 and below 50% for SF-295 . Apart from using a pyrimidin-2-one ring in exchange for the diketo group of curcumin, a series of piperidin-4-one analogs was also explored in this matter. Huber et al. synthesized novel C5-curcuminoids to be tested on glioma cell lines and subjected to blood-brain barrier permeation studies . Three of them (compounds 36 – 38 ), with promising properties, are shown in . The C5-curcuminoids exhibited better stability, and the introduction of a ring in the alkyl part of the compound made the structure more rigid. Moreover, the aryl rings were p-substituted by halogen/alkylhalogen groups or by hydrogen, as that kind of modification could improve cytotoxicity. For 37 , the authors exploited a known motif from lidocaine, a weakly basic nitrogen atom that enhances blood-brain barrier permeation, and for 38 , a carboxylic group was introduced to exploit the function of monocarboxylic acid transporters. In the in vitro study, 36 and 38 were the most potent against astrocytoma and neuroblastoma, respectively (IC 50 below 1 nM). Interestingly, the cytotoxic effect did not rise exponentially with increasing doses of these compounds, which might indicate some saturation of targets. The major concern is toxicity towards healthy cells, which cannot be fully avoided, but curcumin derivative 38 revealed the best selectivity for neuroblastoma over kidney cells. The trifluoromethyl substitution present in the chemical structure of 38 was labeled as the most promising compared to methoxy or halogen substitution. Overall, the most potent compounds were those with a monocarboxylic acid-substituted nitrogen in the piperidin-4-one ring.
No clear correlation could be found between lipophilicity/solubility and cytotoxicity, but the study provided evidence that some compounds undergo the thia-Michael reaction, an effect that could increase solubility . In terms of the blood-brain barrier (BBB) permeability of 36 – 38 , it was found that 37 , the most insoluble in water of the three, showed the best permeability, which is consistent with the fact that lipophilic substances cross the BBB . Structurally similar curcumin derivatives, including pentafluoro-substituted compounds, were used to target the stem-like phenotype of glioma cells, which is responsible for cancer recurrence . The introduction of the electronegative pentafluorothio group, as in 39 and 40 , revealed a larger impact on bioactivity than the fluoro moiety in the rings, especially in terms of antiproliferative and antiangiogenic activities. The cytotoxic effect of 39 was up to ten-fold greater than that of the fluorinated analog 36a . In all tested cells, including the U251 and Mz54 glioblastoma cell lines, the methylated analog 39 was also more potent than the ethylated one, 40 . Both 39 and 40 decreased the sphere-forming capacities of the glioma stem-like cell sphere cultures, with IC 50 values at nanomolar concentrations. In addition, the novel compounds were more selective toward cancer cells than toward endothelial hybrid cell lines (EA.hy926) . In turn, the activity of the curcumin-piperidin-4-one derivatives 41 and 42 , differing from curcumin only in the diketo fragment, was evaluated on the LN-18 human glioblastoma cell line . Both 41 and 42 showed greater antiproliferative potential in the cell culture than curcumin. The IC 50 values towards healthy cells were around 2.3 times higher than for LN-18, indicating the selectivity of the compounds toward cancer cells. Besides the cytotoxic effect, the authors found that 41 and 42 had an anti-migratory effect, and 41 additionally presented an anti-invasion effect. The tested analogs caused cell cycle arrest of LN-18 in the SD phase, while for curcumin, the effect was noted in the G2/M phase . A slightly different approach (which was successful for breast cancer) to curcumin derivatization was reported by Shin et al. . The two novel compounds were equipped with a conjugated ring in the alkyl region between the aromatic rings and a side chain bearing 18 F. Positron emission tomography imaging of C6 glioma-xenografted mice indicated the highest uptake in tumor tissue for 44 , but the tumor-to-blood and tumor-to-muscle ratios of 43 and 44 were nearly the same . Further modifications of curcumin, involving linking its hydroxyl groups in the para position with a second-generation polyester dendron, were presented by Landeros et al. . The para-hydroxyl groups were coupled with a first-generation polyester dendron, leading to compound 45 . This modification did not cause a loss of antioxidant properties, whereas the improvement in solubility was significant. Compound 45 acted in an antiproliferative manner towards C6 glioblastoma cells at lower concentrations than curcumin and was simultaneously less cytotoxic towards the healthy NHDF cell line. Some differences in its mode of action were also observed, as cell death was caused rather by necrosis or autophagy. When uptake was compared to curcumin, 45 was internalized to a lesser extent in the first 6 h, but after 24 h this reversed, and more of compound 45 was internalized . Shi et al. synthesized a curcumin derivative conjugated with a triphenylphosphonium cation through an alkyl chain as a linker ( 46 ).
This approach allowed for its greater accumulation in mitochondria and a decrease in thioredoxin reductase (Trx) enzyme activity, especially of the isoform Trx2. The inhibition of Trx2 was a contributing factor disturbing redox homeostasis, which led to ROS generation and further activation of caspases and intrinsic apoptosis. In addition, disturbed mitochondrial respiration—with basal respiration reduced by half—and a reduction in ATP production were observed in the presence of 46. These effects were not noted, or were only marginal, for the parent curcumin. Among the six types of cancer cell lines tested, the glioma cell line was the most sensitive. Continued in vitro research on various glioma cell lines indicated that temozolomide-resistant glioma cell lines are susceptible to 46. In the final part of the research, an in vivo antitumor activity evaluation was performed in a mouse model, which confirmed a better therapeutic outcome for 46 compared to curcumin. This is consistent with the effects observed for similar derivatives containing a triphenylphosphonium cation as the targeting moiety bound to the curcumin scaffold.

Another curcumin analog researched in vitro and in vivo in glioma models was 47 (also known as C-150). In the chemical structure of 47, one of the hydrogens between the carbonyl groups was substituted with an N-(1-phenylethyl)acrylamide moiety. This led to reduced transcriptional activation of NF-κB and inhibition of PKC-alpha kinase, both proteins implicated in gliomas, with seemingly no effect on mTOR or AKT1. Compound 47 was more cytotoxic in at least eight glioma cell lines and had 26-fold lower NF-κB inhibition values than curcumin. Moreover, in vivo studies revealed that 47 inhibited the formation of tumors in a special mutant strain of Drosophila and prolonged the median survival time of a rat model with intracerebrally implanted glioblastoma cells.

Based on the structures of currently approved histone deacetylase (HDAC) inhibitors and on molecular modeling, Wang et al. modified one of the aromatic rings to increase the inhibitory potential of curcumin toward HDAC. The methoxy group was removed, and the hydroxyl group was changed to N-hydroxyacrylamide. According to molecular modeling, the novel compound consists of three major regions: (i) the cap group, which is exposed to the solvent space and interacts with the rim of the catalytic tunnel; (ii) the metal-binding group, which occupies the catalytic site; and (iii) the carbon linker, which connects the two parts and interacts with a phenylalanine residue through π–π stacking. Compound 48 revealed greater inhibitory potential in vitro against some HDAC isoforms, though it was somewhat lower than that of vorinostat, an FDA-approved HDAC inhibitor; the IC50 value of 48 was lower than that of curcumin and higher than that of vorinostat. The derivative was more resistant to metabolism, as its stability in human liver microsomes was five times higher than that of curcumin. The in vivo study in mice showed a T1/2 of 3.2 h after oral dosing, with a bioavailability of 40.2%. Blood-brain barrier permeability was low but acceptable, with brain-to-plasma ratios of 0.08–0.23. It was also established that 48 caused apoptosis and cell cycle arrest in the G2/M phase. An in vivo comparison with vorinostat in mice bearing subcutaneously inoculated U87 cells revealed no observable toxicity for either compound, yet 48 inhibited tumor growth twice as much.
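For orientation, the brain-to-plasma ratios and half-life quoted for 48 can be restated in the quantities more commonly reported in CNS drug-discovery work, namely logBB and the elimination rate constant. The conversion below is simply a standard pharmacokinetic calculation applied to the numbers given above; it is not an additional result from the cited study.

```latex
\log BB = \log_{10}\!\left(\frac{C_{\mathrm{brain}}}{C_{\mathrm{plasma}}}\right)
\;\Rightarrow\; \log_{10}(0.08) \approx -1.10,\qquad \log_{10}(0.23) \approx -0.64;
\qquad
k_{\mathrm{el}} = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{3.2\ \mathrm{h}} \approx 0.22\ \mathrm{h}^{-1}.
```

By the commonly used rule of thumb that logBB values below about −1 indicate poor brain penetration, these figures sit at the low end, consistent with the "low but acceptable" permeability described above.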
The reviewed literature provides insights into how certain structural modifications impact the physicochemical properties, in vivo behavior, and anticancer efficacy of the above-discussed curcumin analogs.
– Hydroxyl and methoxyl group modifications: Polyester dendrimeric substitutions enhance both solubility and activity. A triphenylphosphonium cation increases the mitochondrial accumulation of curcumin derivatives.
– β-Diketo moiety adjustments: Pyrimidinone modifications result in improved anticancer activity. The addition of N-phenyl amides and carboxylic acids enhances both blood-brain barrier permeability and antiglioma activity. N-Methyl-substituted piperidinones exhibit greater efficacy than N-ethyl derivatives, with non-substituted variants being more effective than substituted ones.
– Ring modifications: Trifluoromethoxy substitution at the 4′ position significantly increases cytotoxicity compared to hydrogen, chlorine, and fluorine substitutions. Pentafluorothio substituents at the 4′ position demonstrate greater efficacy than 2′-fluorine substituents. Asymmetric curcumin derivatives, featuring one unchanged ring and one phenyl ring with p-substituted N-hydroxyacrylamide, effectively inhibit histone deacetylase.

2.3. Pancreatic Cancer

Pancreatic cancer ranks as one of the leading causes of cancer-related deaths globally, with its incidence more than doubling over the past 25 years. The most affected regions include North America, Europe, and Australia. While this rise is largely driven by an aging global population, several modifiable risk factors—such as smoking, obesity, diabetes, and alcohol use—significantly contribute to the disease. The increasing prevalence of these risk factors in many parts of the world is causing a rise in the age-adjusted incidence rates of pancreatic cancer. However, the extent to which these risk factors contribute varies across regions due to differences in their prevalence and the effectiveness of prevention strategies. Pancreatic cancer, often referred to as the “silent killer,” poses a significant challenge in cancer treatment. Dysregulation of the PI3Kα signaling pathway in pancreatic cancer has become a focal point for therapeutic strategies. As a result, curcumin derivatives have gained attention as potential PI3Kα inhibitors, offering a promising new approach to developing effective treatments for this aggressive disease.

In order to better understand the mechanism of action of the known curcumin derivative 49 (also known as HO-3867), Hu et al. studied its effect on the PANC-1 and BxPC-3 pancreatic cancer cell lines. The antiproliferative activity towards these cell lines was confirmed, and the authors noted changes in the levels of apoptosis-related proteins—a decrease in Bcl-2 and procaspase 3 and an increase in cleaved PARP protein. At the same time, no changes in Bax expression were noticed. The activity of 49 was correlated with an increased level of ROS generation. Moreover, augmented levels of endoplasmic reticulum stress-related proteins were found. The generation of ROS played a major role in cell apoptosis, as the addition of a ROS scavenger abrogated the decrease in Bcl-2 levels and cell apoptosis. The remaining apoptotic effect was correlated with inhibition of P-STAT3, a protein implicated in resistance to the induction of cancer cell apoptosis; in this case, the inhibition did not decrease with the addition of the ROS scavenger. Taken together, the authors found two independent 49-mediated apoptosis pathways.
The mentioned pentafluorothio analog 41, synthesized by Linder et al., revealed strong inhibitory potential against Panc-1 cells, with an IC50 in the nanomolar range, but its inhibitory potential against IκB kinase β (IKKβ) is unknown. Xie et al., starting from two derivatives known from the literature, namely EF24 (later discussed in subchapter 3.2.) and EF31 (compound 53), synthesized a series of analogs (50–52). The rationale for the study was to obtain compounds with strong inhibitory potential against IKKβ, a protein involved in pancreatic cancer development and progression. In the study, the piperidin-4-one ring was recognized as playing a pivotal role in the inhibition. Therefore, a series of derivatives substituted with halogen or methoxy groups was obtained. It is worth noting that some of them, fluorinated or brominated, showed good inhibitory activity. Another series, represented by 54, revealed weaker inhibitory potential than the previous one, with the strongest activity noted for derivatives with fluorine and bromine in the ortho positions. Halogen substituents produced a stronger inhibitory effect than methoxy groups. The final series was equipped with bulky substituents on the aryl rings. The most potent derivative, 55, substituted with phenoxyethanamine, could, based on the collected data, likely be a direct inhibitor of IKKβ. Molecular modeling gave some clues about the nature of the binding: the rings were squeezed between nine hydrophobic amino acids, the dimethylaminoethoxy groups were oriented toward the solvent area, and both the protein and the compound changed conformation to adjust to each other. In in vitro studies on three pancreatic cancer cell lines—Panc-1, MiaPaCa-2, and BxPC-3—the inhibitory effect towards IKKβ was reconfirmed, and the antiproliferative potential was measured. As the outcome of the study, 56 was identified as a potent compound against pancreatic cancer, with lower IC50 values for almost all the cell lines compared to the parent compounds.

Chen et al. investigated 54, also known as the C66 curcuminoid, for its antiproliferative potential towards pancreatic cancer cells. In the study, the authors confirmed, by knocking down the corresponding genes, that c-Jun N-terminal kinase plays an important role in the proliferation of pancreatic cancer cell lines. Moreover, the expression of the kinase was related to enhanced activity of pro-inflammatory factors. The inhibition of kinase phosphorylation by 54 was confirmed in the study, and the in vitro antiproliferative, anti-migration, and antimetastatic properties of the compound were further evaluated. The IC50 values of its activity on Panc-1 and SW1990 cells were 113.4 and 91.83 μM, respectively.

Pignanelli et al. synthesized piperidin-4-one derivatives of curcumin and screened them against cancer and healthy cells. Two compounds, 55 and 57, demonstrated promising properties and were further evaluated. The compounds contributed to intracellular ROS generation and the induction of apoptosis. One aspect of the research involved the combined action of 56 with piperlongumine on the BxPC-3 pancreatic cancer cell line. The results indicated that 56 acted synergistically with piperlongumine in terms of ROS generation and induction of apoptosis. Importantly, this effect was insignificant in healthy cells.

Novel curcumin-derived quaternary ammonium salts were considered as another option in the research against pancreatic cancer.
One of them, 58, presented its lowest IC50 value on MIA PaCa-2, a pancreatic cancer cell line, compared to breast cancer cell lines. The idea that underpinned the derivatization was to join allylic bromide products of the Baylis–Hillman reaction with quaternary ammonium curcuminoids in order to combine the advantages of the two classes of compounds. On the one hand, curcumin showed some anticancer activity, but its low stability and poor water solubility did not allow further application; on the other hand, quaternary ammonium curcuminoids had good water solubility, but their cytotoxicity was low, with cytotoxic concentrations exceeding 100 µM. Changing the methyl group of the ester moiety of 58 to longer alkyl groups decreased cytotoxicity, and simplification of the N-substituent to allyl or benzyl strongly decreased cytotoxicity. The authors also attempted to replace the quaternary ammonium curcuminoid skeleton with N-methylmorpholine, which again proved unsuccessful. Compound 58 was further evaluated in vivo in mice, where its safety was confirmed and it reduced the growth of MIA PaCa-2 xenograft tumors (by 42%, or by 57% when used in combination with gemcitabine).

Szebeni et al. synthesized 33 novel curcuminoids and screened them for antiproliferative activity on liver, lung, and pancreatic cancer cell lines. Derivative 59 was chosen as the most promising compound. Interestingly, the novel compounds revealed the strongest effect on the PANC-1 cell line. Derivatives equipped with carboxylic, dihydroxyphenyl, or para-hydroxyl substituents on the carboxy side of the amide moiety were listed as non-active, while analogous structures with a chloroacetamidomethyl moiety and with hydroxyl or methoxyl groups in the rings had good activity, all of which was in accordance with the SAR studies performed by the authors. Curcumin derivative 59 was found to accumulate in the endoplasmic reticulum and to induce endoplasmic reticulum stress, as evidenced by the up-regulation of genes that help to counteract the stress effect; this up-regulation was stronger than for curcumin. Furthermore, mitochondrial membrane depolarization and induction of apoptosis were noted.

An interesting effect of curcumin and its cyclohexanone analogs was discovered accidentally by Revalde et al. During optimization of a liposomal formulation of gemcitabine with curcumin, the authors found that the interaction of these two compounds was antagonistic. Further evaluation on MIA PaCa-2 and PANC-1 cells showed that the curcuminoids inhibited equilibrative nucleoside transporter 1 at concentrations between 2 and 20 μM and blocked the accumulation of gemcitabine and uridine. The authors concluded that curcumin is unlikely to inhibit gemcitabine uptake in tumors but might have an impact on gastrointestinal absorption. Interestingly, the EF24 (compound 93, Figure 15) analog did not show this effect. In addition, only co-treatment caused this effect; sequential administration did not affect the activity of the transporter.

Based on the structure-activity relationship analysis of curcumin derivatives presented in the works cited above, the following conclusions can be drawn.
– β-Diketo moiety alterations: Pyrimidinone modifications not only enhance anticancer activity but also contribute to reactive oxygen species generation. The piperidin-4-one ring plays a crucial role in inhibiting IKKβ kinase.
Derivatives in which acidic hydrogens are replaced with methylamine and further substituted with groups such as carboxylic acid, dihydroxyphenyl, or para-hydroxyphenyl exhibit decreased cytotoxicity, while amide formation from 2-chloroacetate increases cytotoxicity.
– Ring modifications: Five-membered heterocycles containing oxygen, nitrogen, or sulfur reduce IKKβ kinase inhibition, even when the rings are methyl-substituted. Halogenated rings, particularly those with fluorine or bromine, enhance IKKβ kinase inhibition. The pentafluorothio substitution generates strong inhibitory potential against Panc-1 cells, though its impact on IKKβ kinase remains unknown. Alkylamine substituents are recommended for consideration when designing IKKβ kinase inhibitors. Quaternary ammonium curcuminoids, despite their good solubility, exhibit low cytotoxicity.

2.4. Other Cancers

This subchapter briefly summarizes recent studies on the potential efficacy and mechanisms of action of curcumin and its analogs against five common cancers: colorectal, kidney, lung, and bladder cancer, as well as leukemia.

Colorectal cancer, the third most common cancer worldwide, begins in the large intestine and can spread to the lower gastrointestinal tract. Its development is influenced by genetic mutations, lifestyle factors, and poor diet. Colorectal cancer carcinogenesis involves three key mechanisms: chromosomal instability, the CpG island methylator phenotype, and microsatellite instability. Among environmental factors, dietary habits leading to obesity and high energy intake are important risk factors for colorectal cancer. The treatment of colorectal cancer (CRC) is influenced by several factors, such as the stage of the disease, tumor location, and the patient’s overall health. Standard treatments include surgery, chemotherapy, radiation therapy, and immunotherapy, with surgical removal of the tumor (via open or laparoscopic methods) being the primary treatment approach. These therapies are generally more effective when the cancer is diagnosed early, offering around a 90% survival rate. However, late-stage detection leads to a poorer prognosis, with survival rates dropping to approximately 15% in stage IV, highlighting the need for better detection methods and more effective treatments. Currently, chemotherapy remains the cornerstone of CRC treatment, particularly in metastatic cases. Treatment often involves combinations of fluoropyrimidines, such as 5-fluorouracil (5-FU) or capecitabine (CAPE), alongside irinotecan (IRI) or oxaliplatin (OXA), sometimes supplemented with cetuximab for patients with wild-type (wt) RAS, or with bevacizumab (BEVA). However, overall survival (OS) rates beyond five years are still under 15%.

Curcumin, a natural compound, has been extensively studied as a potential therapeutic agent for CRC due to its demonstrated anticancer effects in both in vitro and in vivo models, as well as its low toxicity profile. Curcumin promotes cancer cell death by increasing ROS generation, which activates both the intrinsic and extrinsic apoptotic pathways. Research suggests that curcumin enhances the expression of pro-apoptotic proteins like Bax and Bak while inhibiting anti-apoptotic proteins regulated by NF-κB, such as Bcl-2, Bcl-xL, XIAP, and Survivin. This leads to cytochrome c release and the induction of apoptosis.
Curcumin’s anti-inflammatory and anticancer properties are largely attributed to its ability to inhibit the NF-κB pathway, which is frequently overactive in CRC and contributes to drug resistance. Beyond its role in apoptosis, curcumin also hinders CRC cell proliferation by influencing cell cycle regulation, inducing arrest at either the G0/G1 or G2/M phase. This is achieved by upregulating cyclin-dependent kinase (CDK) inhibitors such as p16, p21, and p27, while inhibiting CDK2, CDK4, and cyclins B, E, and D1. Additionally, curcumin reduces proliferation by suppressing cyclooxygenase-2 (COX-2) expression through the NF-κB pathway and by modulating AMP-activated protein kinase (AMPK)-AKT signaling. It also affects the Wnt/β-catenin and Notch pathways, which are frequently altered in CRC cases. Despite its promising effects, curcumin’s low bioavailability and unknown interactions with other anticancer drugs limit its therapeutic potential, prompting the search for curcumin analogs with improved physicochemical properties for CRC treatment.

Zhang et al. synthesized two series of curcumin derivatives in order to develop a potent inhibitor of the chymotrypsin-like subunit of the 20S proteasome, which is one of the regulators of basic cellular processes and pathways, including the cell cycle, apoptosis, and DNA repair. All the obtained derivatives inhibited the growth of the HCT116 (human colorectal carcinoma) cell line. Among them, 60 showed excellent inhibition of the chymotrypsin-like activity of the 20S proteasome (IC50 = 0.835 µM), with only a negligible effect on the other subunits—trypsin-like and peptidylglutamyl-peptide hydrolyzing.

Kidney cancer, also known as renal cell carcinoma (RCC), constitutes a group of malignant tumors originating from the epithelium of the renal tubules. RCC is the most common malignant tumor of the kidney, accounting for about 90% of cases. Histologically, about 80% of these tumors are clear cell carcinomas. Kidney cancer ranks as the 15th most frequently diagnosed cancer globally, with a notably higher occurrence in developed countries. The causes of RCC are not precisely known, but risk factors include genetic predisposition (e.g., von Hippel-Lindau syndrome), smoking, hypertension, obesity (especially in women), exposure to certain chemicals, and end-stage renal failure requiring dialysis. RCC accounts for 2–3% of all malignancies, occurring most often in people aged 60–70. Men have the disease 1.5 times more often than women. About 270,000 new cases of RCC are diagnosed annually worldwide, 116,000 of which end in death. The highest incidence rates are seen in Europe, North America, and Australia, and the lowest in Asia and Africa. In Europe, the incidence is 14.5 per 100,000 men and 6.9 per 100,000 women.

Curcumin and its derivatives were found to be involved in several pathways associated with the induction of renal cancer cell death and the modulation of renal cancer cell activity. One of these concerns the modulation of ROS levels. Curcumin and its analog EF24 (later discussed in subchapter 3.2.) were able to increase the activity of peroxidase and, in this way, decrease intracellular ROS levels. Further, this effect led to a decrease in renal cancer cell migration through inhibition of collagenase/gelatinase activity. Chong et al. also reported the suppression of one of the collagenases by curcumin. This effect was combined with inhibition of the transcription factor E Twenty-Six-1 and accompanied by downregulation of the expression of vascular endothelial cadherin.
Both effects resulted in impairment of the tumor’s ability to secure a blood supply through vasculogenic mimicry. Other studies reported the resensitization of renal cell carcinoma when curcumin was used in combination with well-established anticancer drugs. Xu et al. tested combination therapy of curcumin and sunitinib and noticed a reversal of sunitinib resistance. The effect was associated with curcumin’s ability to upregulate the ADAMTS18 gene and with the induction of ferroptosis. Another study by Xu et al. also indicates that curcumin upregulates the ADAMTS18 gene. However, an additional effect of curcumin was noted, namely upregulation of miR-148 expression. Both effects were associated with the suppression of autophagy, and a positive feedback loop between the two genes was proposed. A further study by Xu et al. revealed that the ADAMTS18 gene was upregulated through downregulation of its methylation via the AKT and NF-κB signaling pathways. Obaidi et al. found that curcumin reverses kidney cancer cells’ resistance to TRAIL (tumor necrosis factor (TNF)-related apoptosis-inducing ligand). This was attributed to curcumin’s deregulation of miRNA expression associated with apoptosis regulation, especially let-7C. This contributed to downregulation of the expression of cell cycle proteins; furthermore, two key glycolysis-regulating proteins were also found at reduced levels. Chang et al. also found that curcumin had an impact on miRNA expression, which caused cancer cell death. It was also found that treatment of SK-NEP-1 cells with curcumin led to increased expression of miR-192-5p, whose native role is to downregulate the expression of the PI3K and AKT proteins.

Leukemia ranks as the 13th most common cancer and the 10th leading cause of cancer-related deaths globally, with over 487,000 new cases and 305,000 deaths estimated in 2022. The highest incidence rates are observed in Australia/New Zealand (with Australia having the highest rates among men worldwide), Northern America, and various regions of Europe (with Belgium leading among women). The incidence of leukemia is two to three times higher in developed countries compared to developing ones for both men and women, although mortality rates are similar, particularly among women. Leukemia encompasses a diverse group of hematopoietic cancers with distinct biological subtypes, generally classified into four major categories, each with varied causes, including genetic factors, infections, and enhanced diagnostic capabilities. Acute lymphoblastic leukemia (ALL) is more prevalent in children and exhibits a bimodal pattern, with higher incidence rates in Latin American and Asian countries. Acute myeloid leukemia (AML) is more common in adults but also affects children, with higher incidence rates in countries with a higher Human Development Index (HDI). Chronic lymphoid leukemia (CLL) is more frequent among the elderly and males, with elevated rates in North America, Oceania, and parts of Europe. In contrast, chronic myeloid leukemia (CML) is more commonly observed in adult males in higher-HDI countries.

To assess the efficacy of curcumin in leukemia treatment, a series of studies, both in vitro and in vivo, has been conducted. The literature contains numerous reports on the potential anticancer activity of curcumin as well as its derivatives against various types of leukemia (AML, CML, CLL, and ALL).
The activities of curcumin and its derivatives were related, among other mechanisms, to the induction of apoptosis, inhibition of proliferation, ROS production, and stimulation of autophagy. The mechanism of anticancer action is multidirectional and involves multiple cellular and molecular targets and pathways. The main molecular targets of curcumin in leukemia cells include receptors (e.g., DR-4, DR-5), transcription factors (e.g., Notch-1, NF-κB, STAT3 and STAT5), kinases (e.g., ERK, JAK), growth factors (e.g., VEGF), inflammatory cytokines (e.g., interleukins), and others (e.g., HSP-90, Bcl).

Due to the low potency of curcumin, higher doses are required to achieve a therapeutic response in leukemia, which increases the risk of adverse effects and reduces patient compliance. To address these limitations, various derivatives have been synthesized, and combination therapies have been explored. Examples include the combination of curcumin with plant compounds such as quercetin or cannabidiol, or with chemotherapeutics such as thalidomide or imatinib. In all cases, the co-administration of two or more compounds was shown to enhance the effectiveness of therapy compared to monotherapy. Except for the last one, all of the mentioned studies involved in vitro tests or in vivo tests in mouse models. The final study was a randomized controlled trial conducted on fifty CML patients, who were treated for 6 weeks with imatinib alone (800 mg per day) or with imatinib and curcumin (800 mg per day and 15 g per day, respectively). A significant decrease in plasma NO levels and a better hematological response and tolerance were demonstrated after combined imatinib and curcumin therapy compared to imatinib therapy alone. Based on these studies, the authors concluded that curcumin can be used as an adjuvant to imatinib therapy due to its prominent anti-neoplastic activity.

In the context of combined therapies, Zhang et al. presented intriguing findings. They demonstrated an interaction between curcumin and interferon signaling pathways, which could potentially provide the theoretical basis for a curcumin–interferon combination in anticancer therapies. Curcumin has been shown to induce the expression of interferon regulatory genes, particularly IFIT2, in U937 leukemia cells. Upregulation of IFIT2 by exogenous expression or IFNγ treatment in K562 cells increased apoptosis and enhanced the anticancer effects of curcumin. Conversely, shRNA-mediated IFIT2 knockdown inhibited curcumin-induced apoptosis in U937 cells. These findings open up new avenues for research and potential future combined treatments.

Among modified curcumin derivatives, it is worth mentioning a compound named C817 (73), which was tested in vitro on wild-type (WT) and imatinib-resistant mutant Abl kinases, as well as in imatinib-sensitive and -resistant CML cells. Compound 73 acted as a potent inhibitor of both WT and mutant Abl kinases, effectively blocking proliferation in vitro. Moreover, it was shown that this derivative could eradicate human leukemia progenitor/stem cells. Thus, this compound might potentially be considered for the treatment of CML patients with Bcr-Abl kinase domain mutations that confer resistance to imatinib. Another curcumin analog that also revealed activity against CML K562 and AML HL60 cells is a compound named C212 (74). It induced apoptosis and cell cycle arrest at the G2/M phase and inhibited growing leukemia cells with higher efficacy than curcumin.
Analog 74 was also responsible for the removal of quiescent leukemia cells, which are resistant to conventional treatments and whose elimination is key to preventing leukemia relapse. Its mechanism of action was attributed to inhibition of Hsp90, similarly to what was noted for the compound named C1205 (derivative 75). Both analogs, 74 and 75, demonstrated greater Hsp90 inhibition and antitumor effects compared to curcumin. Compound 75 reacted with Hsp90 and led to degradation of the protein in both imatinib-sensitive K562 CML cells and imatinib-resistant K562/G01 CML cells. It also suppressed Akt, MEK, ERK, and C-RAF. The result was significant inhibition of proliferation and induction of apoptosis in K562 and K562/G01 cells. In another study, curcumin and its derivative named CD2066 (76) exhibited anti-viability effects against aggressive T-cell ALL at nanomolar or micromolar concentrations. Both compounds interfered with Notch signaling activity (downregulation), promoted DNA damage, and induced an antiproliferative effect. Among curcumin and seventeen curcumin derivatives, compound 76 was identified as the most active anti-leukemic drug candidate.

Nakamae et al. studied the role of ROS upregulation in tumor suppression. In the study, thirty-nine novel curcumin derivatives were synthesized, and their anti-proliferative and anti-tumorigenic properties were examined. All derivatives exhibited anti-proliferative activity toward human cancer cell lines, including CML-derived K562 leukemic cells, in a manner sensitive to the antioxidant N-acetyl-cysteine. The C7-curcuminoid 61 and its demethylated analog 62 were synthesized via aldol condensation of 2,4-pentanedione with 3,5-dimethoxybenzaldehyde. All the C5-curcuminoids were synthesized via condensation of 1-aryl-1,3-butanediones (prepared by Claisen condensation of the corresponding acetophenone derivatives with ethyl acetate) with aromatic aldehydes. To investigate the anti-proliferative activity of the novel curcumin derivatives on human tumor cells, the authors cultured K562 cells in vitro in the absence and presence of representative compounds (50 μM). All the compounds showed growth-inhibitory activity, but to different degrees, and the induction of cell death varied from one compound to another. Next, the researchers determined and compared the GI50 of all curcumin derivatives using K562 cells. Compounds 61–72 exhibited a GI50 lower than that of curcumin in this in vitro assay. The growth-inhibitory effect of the curcumin derivatives was not restricted to K562 leukemic cells; other types of human cancer cell lines, including U-87 MG glioblastoma, HeLa cervical cancer, MCF-7 breast adenocarcinoma, AN3CA uterine cancer, MIA PaCa-2 and PANC-1 pancreatic cancer, and 293T human embryonic kidney cells, were also sensitive to growth inhibition by the curcumin derivatives. Suppression of the tumorigenic growth of human cancer cells (K562 leukemic cells) in a xenograft mouse model was also studied with the curcumin derivatives. Compounds 62, 65–67, and 69 significantly reduced tumor size, yet the effect was not as pronounced as for curcumin itself. Notably, curcumin and its derivatives did not induce any obvious adverse effects in normal cell lineages under conditions in which they sufficiently inhibited tumor cell growth in vivo, and no toxic effects were observed in mice.
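Since almost every comparison in this section rests on GI50 or IC50 values, it may help to recall how such numbers are typically extracted from raw viability or growth data: a four-parameter logistic (Hill) curve is fitted to the dose-response measurements, and the midpoint of the fit is reported (GI50 additionally corrects for the cell count at the time of drug addition, which is why it is not identical to IC50). The snippet below is a generic illustration with synthetic data, not code or data from any study cited here.

```python
# Minimal sketch (not from any cited study): how IC50/GI50-type values quoted
# in this review are typically derived - a four-parameter logistic (Hill) fit
# to dose-response data. The data points below are synthetic, for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical viability (% of untreated control) vs. concentration (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([98, 96, 90, 75, 52, 30, 15, 8], dtype=float)

# Initial guesses: response spanning 0-100 %, IC50 near 1 uM, Hill slope ~1
popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 100.0, 1.0, 1.0])
bottom, top, ic50, slope = popt
print(f"Fitted IC50 ~ {ic50:.2f} uM, Hill slope ~ {slope:.2f}")
```

The same fit applied to cell counts normalized to the time-zero population yields the GI50 values used in the Nakamae et al. comparison above.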
The first evidence of an increased incidence of lung cancer was noted among miners and other occupational groups during the 19th century. In the first half of the 20th century, an epidemic rise in lung cancer cases was observed. Today, lung cancer is the most common malignancy among men in many countries and remains the leading cause of cancer-related deaths worldwide. Histologically and biologically, lung cancer is a highly complex neoplasm. While sequential premalignant lesions have been well defined for centrally arising squamous cell carcinomas, they are less well documented for other major subtypes, including small-cell lung cancer and adenocarcinoma. The three main morphological types of premalignant lesions identified in the lung are squamous dysplasia, atypical adenomatous hyperplasia, and diffuse idiopathic pulmonary neuroendocrine cell hyperplasia.

Studies on the pharmacokinetics and bioavailability of curcumin have shown that although the substance is safe and well tolerated even at very high doses, its bioavailability is limited by poor absorption and rapid elimination from the body. According to FAO/WHO recommendations, the maximum daily intake of curcumin is 0–1 mg/kg body weight, a level that does not cause any adverse health effects. Wahlstrom and Blennow’s pioneering 1978 study in rats showed that curcumin is mainly excreted unchanged in feces after oral and intraperitoneal administration. Some curcumin was found to appear in the bile after intravenous administration, with the main metabolites being glucuronides of tetrahydrocurcumin (THC) and hexahydrocurcumin. These studies also showed that curcumin accumulates in the intestines and liver but is found only in trace amounts in the brain. Subsequent animal and human studies have confirmed these results, showing low levels of curcumin in the blood after oral administration. In clinical trials in cancer patients given curcumin at high doses, blood levels of the substance were low, while it reached higher levels in intestinal tissues and the liver. Despite numerous studies on curcumin’s safety and efficacy, its poor bioavailability limits its use as a therapeutic agent. To overcome these limitations, researchers are testing various methods to increase curcumin’s bioavailability. These include adjuvants that block metabolic pathways, nanocurcumin, liposomes, micelles, and structural modifications such as isomerization. Innovative approaches, such as polymeric nanoparticles, have shown increased efficacy in delivering curcumin to the body. Curcumin analogs offer improved bioavailability and stronger anti-inflammatory and anti-tumor effects. Despite these promising results, further research is still needed on curcumin’s bioavailability and metabolism, as well as its therapeutic application. Long-term studies on its effectiveness in treating cancer and other conditions are ongoing at various research centers around the world.

Gyuris et al. performed in vitro cytotoxicity assays with two different lung cancer cell lines (A549 and H1975) to evaluate the anticancer activities of their new derivatives (46 compounds, divided into three series). The effects of the most potent analogs (most notably 77) were tested against subcutaneously implanted human lung cancer (A549) in a SCID mouse xenograft model, showing significantly reduced tumor growth. Curcumin and its derivatives are also being evaluated for their potential use in the treatment of bladder cancer, a common malignancy of the urinary system arising in the tissues of the urinary bladder.
Urothelial carcinoma is the most common type of bladder cancer, accounting for more than 90% of cases in industrialized countries. It is particularly common among the elderly, and risk factors include smoking, exposure to chemicals, and chronic cystitis. Treatment primarily involves transurethral resection and intravesical infusion of chemotherapy, but may also include laser ablation, Bacillus Calmette-Guérin bladder treatment, radiation therapy, chemotherapy, or surgical removal of part or all of the bladder. A natural curcuminoid mixture (curcumin, demethoxycurcumin, and bisdemethoxycurcumin) shows activity towards bladder cancer, as it inhibits cell proliferation and migration while promoting apoptosis through the suppression of MMP signaling pathways. Curcumin itself was also found to reduce bladder tumor growth in animal models.

One potential approach to increasing biological activity is the introduction of fluorine atoms into a molecule. Based on this principle, our group synthesized a series of fourteen curcumin fluoro-analogs. These compounds were tested in vitro against the bladder cancer cell lines 5637 and SCaBER. The study showed that the presence of the BF2 group at the diketone fragment is crucial for cytotoxic activity, as compounds with an unaltered 3,5-diketone unit showed significantly lower activity against the bladder cancer cell lines. Additionally, curcumin-BF2 adducts with a methoxyl group demonstrated higher activity compared to those with hydroxyl or fluorine groups, and compounds with a single fluorine atom were more effective than those with two fluorine atoms. It was also found that the distribution of substituents in the benzene ring significantly affects the anticancer activity of curcumin analogs. Among the synthesized BF2 adducts, derivatives with a 3-fluoro-4-methoxyphenyl group proved to be the most cytotoxic. Compound 78 exhibited IC50 values of 6.49 μM and 3.31 μM for the 5637 and SCaBER cell lines, respectively, after a 24-hour incubation, demonstrating superior efficacy compared to curcumin.

The introduction of a BF2 moiety at the carbonyl groups was also applied by Lazewski et al. The authors obtained a series of curcumin derivatives by substituting the phenolic groups with poly(ethylene glycol) (PEG) chains and adding a BF2 moiety to the carbonyl groups. The compounds were tested for their cytotoxic activity against two bladder cancer cell lines, 5637 and SCaBER. Cell viability was analyzed under normoxic and hypoxic conditions (1% oxygen). The study showed that PEGylated curcumin inhibited the cell cycle in the G2/M phase in a concentration-dependent manner and induced the expression of proteins involved in cell cycle regulation, cell proliferation, and the response to hypoxic conditions. Under hypoxia, but not normoxia, compound 79 increased the expression of stress-related proteins associated with c-Jun N-terminal kinase signaling, angiogenesis, ECM patterning, and the p21 signaling pathway.

To improve the pharmacokinetic properties and enhance the biological effects of curcumin, Bakun et al. synthesized and characterized a series of 30 compounds inspired by its structure, which were evaluated against the bladder cancer cell lines 5637 and SCaBER. Compound 80 proved to have the best activity, showing IC50 values of 1.2 μM and 2.2 μM against the 5637 and SCaBER cell lines, respectively, after 24 h.
Analysis of the structure-activity relationship of the most active compounds showed that symmetric curcuminoids exhibited higher anti-tumor activity compared to quasi-curcuminoids. Moreover, modification of curcumin’s β-diketone moiety with the BF2 grouping significantly enhanced its cytotoxic activity.

Summarizing the above-discussed publications, one can draw conclusions about how specific structural modifications concerning the hydroxyl and methoxyl groups or the β-diketo moiety influence anticancer activity.
– Hydroxyl and methoxyl groups: Modifications in methylation patterns, such as the addition of a third hydroxyl or methoxyl group, as well as complete methylation or demethylation, did not result in enhanced activity against CML-derived K562 leukemic cells. Curcumin-BF2 adducts containing a methoxyl group exhibited greater activity against bladder cancer cells (5637 and SCaBER) than those with hydroxyl or fluorine groups.
– β-Diketo moiety: Pyrimidinone derivatives demonstrated improved anticancer activity, and the incorporation of bulky N-substitutions should be considered when designing proteasome inhibitors derived from curcumin. Shortening the alkyl chain did not lead to increased activity against CML-derived K562 leukemic cells. Complexation of the diketone moiety with a BF2 group increased cytotoxicity against the SCaBER and 5637 bladder cancer lines.
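Because poor solubility, stability, and bioavailability recur as the decisive liabilities throughout the SAR summaries above, candidate analogs are often triaged computationally before synthesis. The following sketch is purely illustrative (it is not taken from any of the cited studies) and uses the open-source RDKit toolkit to compute Lipinski/Veber-type descriptors for curcumin from a SMILES string; the same call can be applied to the SMILES of any analog under consideration.

```python
# Illustrative only: rule-of-thumb physicochemical triage of curcuminoid
# candidates with RDKit (not taken from any study cited above).
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

# Curcumin (keto form); replace with the SMILES of any analog of interest.
curcumin_smiles = "COc1cc(/C=C/C(=O)CC(=O)/C=C/c2ccc(O)c(OC)c2)ccc1O"
mol = Chem.MolFromSmiles(curcumin_smiles)

descriptors = {
    "MolWt": Descriptors.MolWt(mol),          # molecular weight
    "cLogP": Crippen.MolLogP(mol),            # lipophilicity estimate
    "TPSA": Descriptors.TPSA(mol),            # topological polar surface area
    "HBD": Lipinski.NumHDonors(mol),          # hydrogen-bond donors
    "HBA": Lipinski.NumHAcceptors(mol),       # hydrogen-bond acceptors
    "RotB": Lipinski.NumRotatableBonds(mol),  # rotatable bonds (Veber)
}
for name, value in descriptors.items():
    print(f"{name}: {value:.2f}" if isinstance(value, float) else f"{name}: {value}")
```

Such descriptor filters are, of course, only a crude first pass; the experimental solubility, stability, and permeability data discussed throughout this section remain decisive.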
Curcumin inhibits the growth of malignant gliomas by affecting various cellular processes, including proliferation, apoptosis (through downregulation of bcl-2, bcl-xL, and activation of caspases), autophagy, angiogenesis, immunomodulation, as well as invasion and metastasis. In particular, curcumin has been found to selectively target and kill cancer cells while non-cancerous nerve cells such as astrocytes and neurons. In addition, it can trigger autophagy, which is regulated by simultaneous inhibition of the Akt/mTOR/p70S6K pathway and activation of the ERK1/2 pathway. Overall, these findings underscore the anticancer potential of curcumin, as well as its analogs, which may exhibit better activity and bioavailability than curcumin alone . The previously described compound 1 also had potent activities against CNS cancer cell lines, stronger than curcumin. Specifically, the growth inhibition was above 80% for SF-268, SF-539, and U251 and below 50% for SF-295 . Apart from using a pyrimidin-2-one ring in exchange for the diketo group of curcumin, a series of piperidin-4-one analogs was also explored in this matter. Huber et al. synthesized novel C5-curcuminoids to be tested on glioma cell lines and subjected to blood-brain barrier permeation studies . Three of them (compounds 36 – 38 ), with promising properties, are shown in . The C5-curcuminoids exhibited better stability, and the introduction of a ring in the alkyl part of the compound made the structure more rigid. Moreover, the aryl rings were p-substituted by halogen/alkylhalogen or by hydrogen, as that kind of modification could improve cytotoxicity. For 37 , the authors exploited a known motif from lidocaine—a weakly basic nitrogen atom that enhances blood-brain barrier permeation and for 38 , a carboxylic group was introduced to exploit the function of monocarboxylic acid transporters. In the in vitro study, 36 and 38 were the most potent against astrocytoma and neuroblastoma, respectively (IC 50 below 1 nM). Interestingly, the cytotoxic effect did not exponentially rise with increasing a dose of these compounds, which might indicate some saturation of targets. The major concern refers to its toxicity towards healthy cells, which cannot be fully avoided, but curcumin derivative 38 revealed the best selectivity against neuroblastoma compared to kidney cells. The trifluoromethyl substitution present in the chemical structure of 38 was labeled as the most promising compared to methoxy or halogenic substitution. Overall, the most potent compounds were those with monocarboxylic substituted nitrogen in the piperidin-4-one ring. No clear correlation could be found between lipophilicity/solubility and cytotoxicity, but the study provided evidence that some compounds undergo the thia-Michael reaction effect that could increase solubility . In terms of blood-brain barrier (BBB) permeability of 36 – 38 , It was found that 37 , the most insoluble in water of the three, showed the best permeability, which is consistent with the fact that lipophilic substances cross the BBB . Structurally similar curcumin derivatives, including pentafluoro-substituted compounds, were used to target the stem-like phenotype of glioma cells, which is responsible for cancer recurrence . The introduction of the electronegative pentafluorothio group revealed a larger impact on bioactivity than on the fluoro moiety in the rings, as 39 and 40 , especially in terms of antiproliferative and antiangiogenic activities. 
The cytotoxic effect of 39 was up to ten-fold greater than that of the fluorinated analog 36a. In all tested cells, including the U251 and Mz54 glioblastoma cell lines, the methylated analog 39 was also more potent than the ethylated one, 40. Both 39 and 40 decreased the sphere-forming capacity of glioma stem-like cell sphere cultures, with IC50 values at nanomolar concentrations. In addition, the novel compounds were more selective toward cancer cells than toward the endothelial hybrid cell line (EA.hy926). In turn, the activity of the curcumin-piperidin-4-one derivatives 41 and 42, differing from curcumin only in the diketo fragment, was evaluated on the LN-18 human glioblastoma cell line. Both 41 and 42 showed greater antiproliferative potential in the cell culture than curcumin. The IC50 values towards healthy cells were around 2.3 times higher than those for LN-18, indicating the selectivity of the compounds toward cancer cells. Besides the cytotoxic effect, the authors found that 41 and 42 had an anti-migratory effect, and 41 additionally presented an anti-invasion effect. The tested analogs caused cell cycle arrest of LN-18 in the SD phase, while for curcumin, the effect was noted in the G2/M phase.
A slightly different approach to curcumin derivatization (which was successful for breast cancer) was reported by Shin et al. The two novel compounds were equipped with a conjugated ring in the alkyl region between the aromatic rings and a side chain bearing 18F. Positron emission tomography imaging of C6 glioma-xenografted mice indicated the highest uptake in tumor tissue for 44, but the tumor-to-blood and tumor-to-muscle ratios of 43 and 44 were nearly the same. Further modifications of curcumin, involving linking its hydroxyl groups in the para position with a second-generation polyester dendron, were presented by Landeros et al. The para-hydroxyl groups were coupled with the polyester dendron, leading to compound 45. This modification did not cause a loss of antioxidant properties, while the improvement in solubility was significant. Compound 45 acted in an antiproliferative manner towards C6 glioblastoma cells at lower concentrations than curcumin and was simultaneously less cytotoxic towards the healthy NHDF cell line. Some differences in its mode of action were also observed, as cell death was caused rather by necrosis or autophagy. When uptake was compared to curcumin, 45 was internalized to a lesser extent in the first 6 h, but after 24 h this reversed, and more of compound 45 was internalized. Shi et al. synthesized a curcumin derivative conjugated with a triphenylphosphonium cation through an alkyl chain as a linker (46). This approach allowed for its greater accumulation in mitochondria and a decrease in thioredoxin reductase (Trx) enzyme activity, especially of the isoform Trx2. The inhibition of Trx2 was a contributing factor disturbing redox homeostasis, which led to ROS generation and further activation of caspases and intrinsic apoptosis. In addition, disturbed mitochondrial respiration, with basal respiration reduced by half, and reduced ATP production were observed in the presence of 46. These effects were not noted, or were only marginal, for the parent curcumin. Among the six types of cancer cell lines tested, the glioma cell line was the most sensitive. The continuation of in vitro research on various glioma cell lines indicated that temozolomide-resistant glioma cell lines are susceptible to 46.
In the final part of the research, an in vivo antitumor activity evaluation was performed on a mouse model, which confirmed a better therapeutic outcome for 46 compared to curcumin. This is consistent with the effects observed for similar derivatives containing a triphenylphosphonium cation as the targeting moiety bound to the curcumin scaffold. Another curcumin analog researched in vitro and in vivo on glioma models was 47 (also known as C-150). In the chemical structure of 47, one of the hydrogens between the carbonyl groups was substituted with an N-(1-phenylethyl)acrylamide moiety. This led to reduced transcriptional activation of NF-κB and inhibited PKC-alpha kinase, both proteins implicated in gliomas, with seemingly no effect on mTOR or AKT1. Compound 47 was more cytotoxic in at least eight glioma cell lines and inhibited NF-κB at concentrations 26 times lower than curcumin. Moreover, in vivo studies revealed that 47 inhibited the formation of tumors in a special mutant strain of Drosophila and prolonged the median survival time of a rat model with intracerebrally implanted glioblastoma cells.
Based on the structures of currently approved histone deacetylase (HDAC) inhibitors and molecular modeling, Wang et al. modified one of the aromatic rings to increase the inhibitory potential of curcumin toward HDAC. The methoxy group was removed, and the hydroxyl group was changed to N-hydroxyacrylamide. According to molecular modeling, the novel compound consists of three major regions: (i) the cap group, which is exposed to the solvent space and interacts with the rim of the catalytic tunnel; (ii) the metal-binding group, which occupies the catalytic site; and (iii) the carbon linker, which connects the two parts and interacts with phenylalanine through π–π stacking. Compound 48 revealed greater inhibitory potential in vitro against some HDAC isoforms than curcumin, but it was slightly lower compared to vorinostat, an FDA-approved HDAC inhibitor. The IC50 value was lower for 48 compared to curcumin and higher compared to vorinostat. The derivative was more resistant to metabolism, as its stability in human liver microsomes was five times higher than that of curcumin. The in vivo study on mice showed a T1/2 of 3.2 h after oral dosing, with a bioavailability of 40.2%. Blood-brain barrier permeability was low but acceptable, with brain-to-plasma ratios of 0.08–0.23. It was also established that 48 caused cell apoptosis and cell cycle arrest in the G2/M phase. An in vivo comparison with vorinostat in mice subcutaneously inoculated with U87 cells revealed no observable toxicity for either compound, yet 48 inhibited tumor growth twice as much.
The reviewed literature provides insights into how certain structural modifications impact the physicochemical properties, in vivo behavior, and anticancer efficacy of the above-discussed curcumin analogs.
– Hydroxyl and methoxyl group modifications: Polyester dendrimeric substitutions enhance both solubility and activity. The triphenylphosphonium cation increases the mitochondrial accumulation of curcumin derivatives.
– ß-Diketo moiety adjustments: Pyrimidinone modifications result in improved anticancer activity. The addition of N-phenyl amides and carboxylic acids enhances both blood-brain barrier permeability and antiglioma activity. N-Methyl-substituted piperidinones exhibit greater efficacy than N-ethyl derivatives, with non-substituted variants being more effective than substituted ones.
– Ring modifications: Trifluoromethoxy substitution at the 4′ position significantly increases cytotoxicity compared to hydrogen, chlorine, and fluorine substitutions. Pentafluorothio substituents at the 4′ position demonstrate greater efficacy than 2′-fluorine substituents. Asymmetric curcumin derivatives, featuring one unchanged ring and one phenyl ring with p-substituted N-hydroxyacrylamide, effectively inhibit histone deacetylase.
Pancreatic cancer ranks as one of the leading causes of cancer-related deaths globally, with its incidence more than doubling over the past 25 years. The most affected regions include North America, Europe, and Australia. While this rise is largely driven by an aging global population, several modifiable risk factors, such as smoking, obesity, diabetes, and alcohol use, significantly contribute to the disease. The increasing prevalence of these risk factors in many parts of the world is causing a rise in the age-adjusted incidence rates of pancreatic cancer. However, the extent to which these risk factors contribute varies across different regions due to differences in their prevalence and the effectiveness of prevention strategies. Pancreatic cancer, often referred to as the "silent killer," poses a significant challenge in cancer treatment. The dysregulation of the PI3Kα signaling pathway in pancreatic cancer has become a focal point for therapeutic strategies. As a result, curcumin derivatives have gained attention as potential PI3Kα inhibitors, offering a promising new approach to developing effective treatments for this aggressive disease.
In order to better understand the mechanism of action of the known curcumin derivative 49 (also known as HO-3867), Hu et al. studied its effect on the pancreatic cancer cell lines PANC-1 and BxPC-3. The antiproliferative activity towards these cell lines was confirmed, and the authors noted changes in the levels of apoptosis-related proteins: a decrease in Bcl-2 and procaspase 3 and an increase in cleaved PARP. At the same time, no changes in Bax expression were noticed. The activity of 49 was correlated with an increased level of ROS generation. Moreover, augmented levels of endoplasmic reticulum stress-related proteins were found. The generation of ROS played a major role in cell apoptosis, as the addition of a ROS scavenger abrogated the decrease in Bcl-2 levels and cell apoptosis. The remaining apoptotic effect was correlated with inhibition of P-STAT3, a protein implicated in the resistance of cancer cells to apoptosis induction. In this case, the inhibition did not decrease with the addition of the ROS scavenger. Taken together, the authors found two independent 49-mediated apoptosis pathways. The previously mentioned pentafluorothio analog 41, synthesized by Linder et al., revealed strong inhibitory potential against Panc-1 cells, with an IC50 in the nanomolar range, but its inhibitory potential against IκB kinase β (IKKβ) is unknown. Xie et al., based on two derivatives known in the literature, namely EF24 (discussed later in Subchapter 3.2) and EF31 (compound 53), synthesized a series of analogs (50–52). The rationale for the study was to obtain compounds with strong inhibitory potential against IKKβ, a protein involved in pancreatic cancer development and progression. In the study, the piperidin-4-one ring was recognized as playing a pivotal role in the inhibition. Therefore, a series of derivatives substituted with halogen or methoxy groups was obtained.
It is worth noting that some of them, fluorinated or brominated, showed good inhibitory activity. Another series, represented by 54, revealed weaker inhibitory potential than the previous one, with the strongest activity noted for derivatives with fluorine and bromine in the ortho positions. Halogen substituents introduced a stronger inhibitory effect compared to methoxy groups. The final series was equipped with bulky substituents on the aryl rings. The most potent derivative, 55, substituted with phenoxyethanamine, is, based on the collected data, likely a direct inhibitor of IKKβ. Molecular modeling gave some clues about the nature of the binding. The rings were squeezed between nine hydrophobic amino acids, and the dimethylaminoethoxy groups were oriented toward the solvent area. Both the protein and the compound changed conformation to adjust to each other. In the in vitro studies on three pancreatic cancer cell lines (Panc-1, MiaPaCa-2, and BxPC-3), the inhibitory effect towards IKKβ was reconfirmed, and the antiproliferative potential was measured. As the outcome of the study, 56 was denoted as a potent compound against pancreatic cancer, with lower IC50 values for almost all the cell lines compared to the parent compounds. Chen et al. investigated 54, also known as the curcuminoid C66, for its antiproliferative potential towards pancreatic cancer cells. In the study, the authors confirmed, by knocking down the corresponding genes, that c-Jun N-terminal kinase plays an important role in the proliferation of pancreatic cancer cell lines. Moreover, the expression of the kinase was related to enhanced activity of pro-inflammatory factors. The inhibition of kinase phosphorylation by 54 was confirmed in the study, and the in vitro antiproliferative, anti-migration, and antimetastatic properties of the compound were further evaluated. The IC50 values of its activity on Panc-1 and SW1990 cells were 113.4 and 91.83 μM, respectively. Pignanelli et al. synthesized piperidin-4-one derivatives of curcumin and screened them against cancer and healthy cells. Two compounds, 55 and 57, demonstrated promising properties and were further evaluated. The compounds contributed to intracellular ROS generation and induction of apoptosis. One aspect of the research involved the combinatory action of 56 with piperlongumine on the BxPC-3 pancreatic cancer cell line. The results indicated that 56 acted synergistically with piperlongumine in terms of ROS generation and induction of apoptosis. Importantly, this effect was insignificant in healthy cells. Novel curcumin-derived quaternary ammonium salts were considered as another option in the research against pancreatic cancer. One of them, 58, presented the lowest IC50 value on MIAPaCa-2, a pancreatic cancer cell line, compared to that on breast cancer cell lines. The idea that underpinned the derivatization was to join allylic bromide products of the Baylis–Hillman reaction with quaternary ammonium curcuminoids in order to utilize the advantages of both. Curcumin itself showed some anticancer activity, but its low stability and poor water solubility did not allow further application; quaternary ammonium curcuminoids, on the other hand, had good water solubility, but their cytotoxicity was low, with IC50 values exceeding 100 µM. Changing the methyl group of the ester moiety of 58 to longer alkyl groups decreased cytotoxicity. The simplification of the N-substituent to allyl or benzyl strongly decreased cytotoxicity.
The authors also attempted to replace the quaternary ammonium curcuminoid skeleton with N-methylmorpholine, which again proved unsuccessful. Compound 58 was further evaluated in vivo in mice, where its safety was confirmed and it reduced the growth of MIAPaCa-2 xenograft tumors by 42% (or by 57% when used in combination with gemcitabine); the calculation behind such values is illustrated in the sketch after the summary list below. Szebeni et al. synthesized 33 novel curcuminoids and screened them in terms of antiproliferative activity on liver, lung, and pancreas cancer cell lines. Derivative 59 was chosen as the most promising compound. Interestingly, the novel compounds revealed the strongest effect on the PANC-1 cell line. Derivatives equipped with carboxylic, dihydroxyphenyl, or para-hydroxyphenyl substituents on the carboxy side of the amide moiety were listed as non-active, whereas analogous structures with a chloroacetamidomethyl moiety and with hydroxyl or methoxyl groups in the rings had good activity, all of which was in accordance with the SAR studies performed by the authors. Curcumin derivative 59 was found to accumulate in the endoplasmic reticulum and to induce endoplasmic reticulum stress, which was evidenced by the up-regulation of genes that help counteract the stress effect. In comparison to curcumin, the up-regulation was stronger. Furthermore, mitochondrial membrane depolarization and induction of apoptosis were noted. An interesting effect of curcumin and its cyclohexanone analogs was discovered accidentally by Revalde et al. During optimization of the liposomal formulation of gemcitabine with curcumin, the authors found that the interaction of these two compounds was antagonistic. Further evaluation on MIA PaCa-2 and PANC-1 cells showed that the curcuminoids inhibited equilibrative nucleoside transporter 1 at concentrations between 2 and 20 μM and blocked the accumulation of gemcitabine and uridine. The authors concluded that curcumin is unlikely to inhibit gemcitabine uptake in tumors but might have an impact on gastrointestinal absorption. Interestingly, the EF24 analog (compound 93, Figure 15) did not show this effect. In addition, only co-treatment caused this effect; sequential administration did not affect the activity of the transporter.
Based on the structure-activity relationship analysis of curcumin derivatives presented in the works cited above, the following conclusions can be drawn.
– ß-Diketo moiety alterations: Pyrimidinone modifications not only enhance anticancer activity but also contribute to reactive oxygen species generation. The piperidin-4-one ring plays a crucial role in inhibiting IKKβ kinase. Derivatives in which the acidic hydrogens are replaced with methylamine and further substituted with groups such as carboxylic acid, dihydroxyphenyl, or para-hydroxyphenyl exhibit decreased cytotoxicity, while amide formation from 2-chloroacetate increases cytotoxicity.
– Ring modifications: Five-membered heterocycles containing oxygen, nitrogen, or sulfur reduce IKKβ kinase inhibition, even when the rings are methyl-substituted. Halogenated rings, particularly those with fluorine or bromine, enhance IKKβ kinase inhibition. The pentafluorothio substitution generates strong inhibitory potential against Panc-1 cells, though its impact on IKKβ kinase remains unknown. Alkylamine substituents are recommended for consideration when designing IKKβ kinase inhibitors. Quaternary ammonium curcuminoids, despite their good solubility, exhibit low cytotoxicity.
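To make the xenograft figures quoted above easier to interpret, the short Python sketch below reproduces the standard tumor growth inhibition (TGI) calculation on which such percentages are normally based. The function name and the tumor volumes are illustrative placeholders of our own choosing, not data reported in the cited study.

```python
# Illustrative only: how tumor growth inhibition (TGI) percentages such as the
# 42% (compound alone) and 57% (compound + gemcitabine) quoted above are
# typically derived from xenograft tumor volumes. All numbers are placeholders.

def tumor_growth_inhibition(mean_volume_control: float, mean_volume_treated: float) -> float:
    """Return TGI as a percentage: 100 * (1 - V_treated / V_control)."""
    return 100.0 * (1.0 - mean_volume_treated / mean_volume_control)

if __name__ == "__main__":
    v_control = 1000.0      # hypothetical mean tumor volume, vehicle group (mm^3)
    v_compound = 580.0      # hypothetical mean volume, compound-only group
    v_combination = 430.0   # hypothetical mean volume, compound + gemcitabine group

    print(f"TGI, compound alone:      {tumor_growth_inhibition(v_control, v_compound):.0f}%")
    print(f"TGI, combination therapy: {tumor_growth_inhibition(v_control, v_combination):.0f}%")
```

With the placeholder volumes chosen here, the script prints 42% and 57%, matching the form of the values reported above; larger control-versus-treated differences translate directly into higher TGI.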
This subchapter briefly summarizes recent studies on the potential efficacy and mechanisms of action of curcumin and its analogs against five common cancers, such as colorectal, kidney, lung, and bladder cancer, as well as leukemia. Colorectal cancer, the third most common cancer worldwide, begins in the large intestine and can spread to the lower gastrointestinal tract. Its development is influenced by genetic mutations, lifestyle factors, and poor diet. Colorectal cancer carcinogenesis involves three key mechanisms: chromosomal instability, CpG island methylator phenotype, and microsatellite instability. Among environmental factors, dietary habits leading to obesity and high energy intake are important risk factors for colorectal cancer. The treatment of colorectal cancer (CRC) is influenced by several factors, such as the stage of the disease, tumor location, and the patient’s overall health. Standard treatments include surgery, chemotherapy, radiation therapy, and immunotherapy, with surgical removal of the tumor (via open or laparoscopic methods) being the primary treatment approach. These therapies are generally more effective when the cancer is diagnosed early, offering around a 90% survival rate. However, late-stage detection leads to a poorer prognosis, with survival rates dropping to approximately 15% in stage IV, highlighting the need for better detection methods and more effective treatments. Currently, chemotherapy remains the cornerstone of CRC treatment, particularly in metastatic cases. Treatment often involves combinations of fluoropyrimidines, such as 5-fluorouracil (5-FU) or capecitabine (CAPE), alongside irinotecan (IRI) or oxaliplatin (OXA), sometimes supplemented with cetuximab for patients with wild-type (wt) RAS or bevacizumab (BEVA). However, overall survival (OS) rates beyond five years are still under 15%. Curcumin, a natural compound, has been extensively studied as a potential therapeutic agent for CRC due to its demonstrated anticancer effects in both in vitro and in vivo models, as well as its low toxicity profile. Curcumin promotes cancer cell death by increasing ROS generation, which activates both intrinsic and extrinsic apoptotic pathways. Research suggests that curcumin enhances the expression of pro-apoptotic proteins like Bax and Bak while inhibiting anti-apoptotic proteins regulated by NF-κB, such as Bcl-2, Bcl-xL, XIAP, and Survivin. This leads to cytochrome c release and the induction of apoptosis. Curcumin’s anti-inflammatory and anticancer properties are largely attributed to its ability to inhibit the NF-κB pathway, which is frequently overactive in CRC and contributes to drug resistance. Beyond its role in apoptosis, curcumin also hinders CRC cell proliferation by influencing cell cycle regulation, inducing arrest at either the G0/G1 or G2/M phases. This is achieved by upregulating cyclin-dependent kinase (CDK) inhibitors like p16, p21, and p27, while inhibiting CDK2, CDK4, cyclin B, E, and D1. Additionally, curcumin reduces proliferation by suppressing cyclooxygenase-2 (COX-2) expression through the NF-κB pathway and modulating AMP-activated protein kinase (AMPK)-AKT signaling. It also affects the Wnt/β-catenin and Notch pathways, which are frequently altered in CRC cases. Despite its promising effects, curcumin’s low bioavailability and unknown interactions with other anticancer drugs limit its therapeutic potential, prompting the search for curcumin analogs with improved physicochemical properties for CRC treatment . Zhang et al. 
synthesized two series of curcumin derivatives in order to develop a potent inhibitor of the chymotrypsin-like subunit of the 20S proteasome, one of the regulators of basic cellular processes and pathways, including the cell cycle, apoptosis, and DNA repair. All the obtained derivatives inhibited the growth of the HCT116 (human colorectal carcinoma) cell line. Among them, 60 showed excellent inhibition of the chymotrypsin-like activity of the 20S proteasome (IC50 = 0.835 µM), with only a negligible effect on the other subunits (trypsin-like and peptidylglutamyl-peptide hydrolyzing).
Kidney cancer, also known as renal cell carcinoma (RCC), constitutes a group of malignant tumors originating from the epithelium of the renal tubules. RCC is the most common malignant tumor of the kidney, accounting for about 90% of cases. Histologically, about 80% of these tumors are clear cell carcinomas. Kidney cancer ranks as the 15th most frequently diagnosed cancer globally, with a notably higher occurrence in developed countries. The causes of RCC are not precisely known, but risk factors include genetic predisposition (e.g., von Hippel-Lindau syndrome), smoking, hypertension, obesity (especially in women), exposure to certain chemicals, and end-stage renal failure requiring dialysis. RCC accounts for 2–3% of all malignancies, occurring most often in people aged 60–70. Men have the disease 1.5 times more often than women. About 270,000 new cases of RCC are diagnosed annually worldwide, 116,000 of which end in death. The highest incidence rates are seen in Europe, North America, and Australia, and the lowest in Asia and Africa. In Europe, the incidence is 14.5 per 100,000 men and 6.9 per 100,000 women.
Curcumin and its derivatives were found to act on several pathways associated with the induction of renal cancer cell death and the inhibition of renal cancer cell activity. One of them concerns the modulation of ROS levels. Curcumin and its analog EF24 (discussed later in Subchapter 3.2) were able to increase peroxidase activity and, in this way, decrease intracellular ROS levels. Further, this effect led to a decrease in renal cancer cell migration through the inhibition of collagenase/gelatinase activity. Chong et al. also reported the suppression of one of the collagenases by curcumin. This effect was combined with the inhibition of the transcription factor E Twenty-Six-1 and accompanied by downregulation of vascular endothelial cadherin expression. Both effects resulted in the impairment of the tumor's ability to ensure blood support by vasculogenic mimicry. Other studies reported the resensitization of renal cell carcinoma when curcumin was used in combination with well-established anticancer drugs. Xu et al. tested combinatory therapy of curcumin and sunitinib and noticed a reversal of sunitinib resistance. The effect was associated with curcumin's ability to upregulate the ADAMTS18 gene and with the induction of ferroptosis. Another study by Xu et al. also indicates that curcumin upregulates the ADAMTS18 gene. However, an additional effect of curcumin was noted, namely upregulation of miR-148 expression. Both effects were associated with the suppression of autophagy, and a positive feedback loop between the two genes was proposed. A further study by Xu et al. revealed that the ADAMTS18 gene was upregulated through the downregulation of its methylation via the AKT and NF-κB signaling pathways. Obaidi et al.
found that curcumin reverses kidney cancer cells' resistance toward TRAIL (tumor necrosis factor (TNF)-related apoptosis-inducing ligand). This was attributed to curcumin's deregulation of miRNA expression associated with apoptosis regulation, especially let-7C. This contributed to the downregulation of cell cycle protein expression, and, furthermore, two key glycolysis-regulating proteins were also expressed to a lesser extent. Chang et al. also found that curcumin had an impact on miRNA expression, which caused cancer cell death. It was also found that treatment of SK-NEP-1 cells with curcumin led to increased expression of miR-192-5p, whose native role is to downregulate the expression of the PI3K and AKT proteins.
Leukemia ranks as the 13th most common cancer and the 10th leading cause of cancer-related deaths globally, with over 487,000 new cases and 305,000 deaths estimated in 2022. The highest incidence rates are observed in Australia/New Zealand (with Australia having the highest rates among men worldwide), Northern America, and various regions of Europe (with Belgium leading among women). The incidence of leukemia is two to three times higher in developed countries compared to developing ones for both men and women, although mortality rates are similar, particularly among women. Leukemia encompasses a diverse group of hematopoietic cancers with distinct biological subtypes, generally classified into four major categories, each with varied causes, including genetic factors, infections, and enhanced diagnostic capabilities. Acute lymphoblastic leukemia (ALL) is more prevalent in children and exhibits a bimodal pattern, with higher incidence rates in Latin American and Asian countries. Acute myeloid leukemia (AML) is more common in adults but also affects children, with higher incidence rates in countries with a higher Human Development Index (HDI). Chronic lymphoid leukemia (CLL) is more frequent among the elderly and males, with elevated rates in North America, Oceania, and parts of Europe. In contrast, chronic myeloid leukemia (CML) is more commonly observed in adult males in higher HDI countries.
To assess the efficacy of curcumin in leukemia treatment, a series of studies, both in vitro and in vivo, has been conducted. The literature contains numerous reports on the potential anticancer activity of curcumin as well as its derivatives against various types of leukemia (such as AML, CML, CLL, and ALL). The activities of curcumin and its derivatives were related, among other effects, to inducing apoptosis, inhibiting proliferation, producing ROS, or stimulating autophagy. The mechanism of anticancer action is multidirectional and includes multiple cellular and molecular targets and pathways. The main molecular targets of curcumin in leukemia cells include receptors (e.g., DR-4, DR-5), transcription factors (e.g., Notch-1, NF-κB, STAT3 and 5), kinases (e.g., ERK, JAK), growth factors (e.g., VEGF), inflammatory cytokines (e.g., interleukins), and others (e.g., HSP-90, Bcl). Due to the low potency of curcumin, higher doses are required to achieve a therapeutic response in leukemia, which increases the risk of adverse effects and reduces patient compliance. To address these limitations, various derivatives have been synthesized, and combination therapies have been explored. An example would be the combination of curcumin with plant compounds such as quercetin or cannabidiol, or with chemotherapeutics like thalidomide or imatinib.
In all cases, the co-administration of two or more compounds was shown to enhance the effectiveness of therapy compared to monotherapy. Except for the last one, all of the mentioned studies involved in vitro tests or in vivo tests in mouse models. The final study was a randomized controlled trial conducted on fifty CML patients, who were treated for 6 weeks with imatinib alone (800 mg per day) or imatinib and curcumin (800 mg per day and 15 g per day, respectively). A significant decrease in plasma NO levels and a better hematological response and tolerance were demonstrated after combined imatinib and curcumin therapy compared to imatinib therapy alone. Based on the studies, the authors concluded that curcumin can be used as an adjuvant to imatinib therapy due to its prominent anti-neoplastic activity. In the context of combined therapies, Zhang et al. presented intriguing findings. They demonstrated an interaction between curcumin and interferon signaling pathways, which could potentially provide the theoretical basis for a curcumin-interferon combination in anticancer therapies. Curcumin has been shown to induce the expression of interferon regulatory genes, particularly IFIT2, in U937 leukemia cells. The upregulation of IFIT2 by exogenous expression or IFNγ treatment in K562 cells increased cell apoptosis and enhanced the anticancer effects of curcumin. Conversely, shRNA-mediated IFIT2 knockdown inhibited curcumin-induced apoptosis in U937 cells. These findings open up new avenues for research and potential future combined treatments.
Among modified curcumin derivatives, it is worth mentioning a compound named C817 (73), which was tested in vitro on wild-type (WT) and imatinib-resistant mutant Abl kinases, as well as in imatinib-sensitive and -resistant CML cells. Compound 73 acted as a potent inhibitor of both WT and mutant Abl kinases, effectively blocking proliferation in vitro. Moreover, it was shown that this derivative could eradicate human leukemia progenitor/stem cells. Thus, this compound might be potentially considered for the treatment of CML patients with Bcr-Abl kinase domain mutations that confer resistance to imatinib. Another curcumin analog that also revealed activity on CML K562 cells and AML HL60 cells is a compound named C212 (74). It induced apoptosis and cell cycle arrest at the G2/M phase, inhibiting the growth of leukemia cells with higher efficacy than curcumin. Analog 74 was also responsible for the removal of quiescent leukemia cells, resistant to conventional treatments, which is key to preventing leukemia relapse. Its mechanism of activity was attributed to the inhibition of Hsp90, similar to that noted for compound C1205 (derivative 75). Both analogs, 74 and 75, demonstrated greater activity in Hsp90 inhibition and antitumor effects compared to curcumin. Compound 75 reacted with Hsp90 and caused protein degradation in both imatinib-sensitive K562 CML cells and imatinib-resistant K562/G01 CML cells. It also suppressed Akt, MEK, ERK, and C-RAF. The result was a significant inhibition of proliferation and induction of apoptosis in K562 and K562/G01 cells. In another study, curcumin and its derivative named CD2066 (76) exhibited antiviability effects in aggressive T-cell ALL at micromolar and nanomolar concentrations, respectively. Both compounds interfered with Notch signaling activity (downregulation), promoted DNA damage, and induced an antiproliferative effect.
Among curcumin and seventeen curcumin derivatives, compound 76 was identified as the most active anti-leukemic drug candidate. Nakamae et al. studied the role of ROS upregulation in tumor suppression. In the study, thirty-nine novel curcumin derivatives were synthesized, and their anti-proliferative and anti-tumorigenic properties were examined. All derivatives exhibited anti-proliferative activity toward human cancer cell lines, including CML-derived K562 leukemic cells, in a manner sensitive to the antioxidant N-acetyl-cysteine. The C7-curcuminoid 61 and its demethylated analog 62 were synthesized via aldol condensation of 2,4-pentanedione with 3,5-dimethoxybenzaldehyde. All the C5-curcuminoids were synthesized via the condensation of aromatic aldehydes with 1-aryl-1,3-butanediones, which were prepared by the Claisen condensation of the corresponding acetophenone derivatives with ethyl acetate. To investigate the anti-proliferative activity of the novel curcumin derivatives on human tumor cells, the authors cultured K562 cells in vitro in the absence and presence of the representative compounds (50 μM). All the compounds showed growth inhibitory activity, but to different degrees, and the induction of cell death varied from one compound to another. Next, the researchers determined and compared the GI50 of all curcumin derivatives using K562 cells. Compounds 61–72 exhibited a GI50 lower than that of curcumin in this in vitro assay. The growth inhibitory effect of the curcumin derivatives was not restricted to K562 leukemic cells; other types of human cancer cell lines, including U-87 MG glioblastoma, HeLa cervical cancer, MCF-7 breast adenocarcinoma, AN3CA uterine cancer, MIA PaCa-2 and PANC-1 pancreatic cancer, and 293T human embryonic kidney cells, were also sensitive to growth inhibition by the curcumin derivatives. Suppression of the tumorigenic growth of human cancer cells (K562 leukemic cells) in a xenograft mouse model was also studied with the curcumin derivatives. Compounds 62, 65–67, and 69 significantly reduced the size of tumors, yet the effect was not as pronounced as for curcumin itself. Notably, curcumin and its derivatives did not induce any obvious adverse effects in normal cell lineages under the conditions in which they sufficiently inhibited tumor cell growth in vivo, and no toxic effects were observed in mice.
The first evidence of an increased incidence of lung cancer was noted among miners and other occupational groups during the 19th century. In the first half of the 20th century, an epidemic rise in lung cancer cases was observed. Today, lung cancer is the most common malignancy among men in many countries and remains the leading cause of cancer-related deaths worldwide. Histologically and biologically, lung cancer is a highly complex neoplasm. While sequential premalignant lesions have been well-defined for centrally arising squamous cell carcinomas, they are less well-documented for other major subtypes, including small-cell lung cancer and adenocarcinoma. The three main morphological types of premalignant lesions identified in the lung are squamous dysplasia, atypical adenomatous hyperplasia, and diffuse idiopathic pulmonary neuroendocrine cell hyperplasia.
Studies on the pharmacokinetics and bioavailability of curcumin have shown that although the substance is safe and well tolerated even in very high doses, its bioavailability is limited by poor absorption and rapid elimination from the body.
According to FAO/WHO recommendations, the maximum daily intake of curcumin is 0–1 mg/kg body weight, which does not cause any adverse health effects. Wahlstrom and Blennow's pioneering 1978 study on rats showed that curcumin is mainly excreted unchanged in feces after oral and intraperitoneal administration. Some curcumin was found to appear in the bile after intravenous administration, with the main metabolites being the glucuronides of tetrahydrocurcumin (THC) and hexahydrocurcumin. These studies have also shown that curcumin accumulates in the intestines and liver but is only found in trace amounts in the brain. Subsequent animal and human studies have confirmed these results, showing low levels of curcumin in the blood after oral administration. In clinical trials in cancer patients given curcumin in high doses, blood levels of the substance were low, while it reached higher levels in intestinal tissues and the liver. Despite numerous studies on curcumin's safety and efficacy, its poor bioavailability limits its use as a therapeutic agent. To overcome these limitations, researchers are testing various methods to increase curcumin's bioavailability. These include adjuvants that block metabolic pathways, nanocurcumin, liposomes, micelles, and structural modifications such as isomerization. Innovative approaches, such as polymeric nanoparticles, have shown increased efficacy in delivering curcumin to the body. Curcumin analogs offer improved bioavailability and stronger anti-inflammatory and anti-tumor effects. Despite these promising results, further research is still needed on curcumin's bioavailability and metabolism, as well as its therapeutic application. Long-term studies on its effectiveness in treating cancer and other conditions are ongoing at various research centers around the world.
Gyuris et al. performed in vitro cytotoxicity assays with two different lung cancer cell lines (A549 and H1975) to evaluate the anticancer activities of the new derivatives (46 compounds, divided into three series). The effects of the most potent analogs (most notably 77) were tested against subcutaneously implanted human lung cancer (A549) in the SCID mouse xenograft model, showing significantly reduced tumor growth.
Curcumin and its derivatives are also being evaluated for their potential use in the treatment of bladder cancer, a common malignancy of the urinary system arising in the tissues of the urinary bladder. Urothelial carcinoma is the most common type of bladder cancer, accounting for more than 90% of cases in industrialized countries. It is particularly common among the elderly, and risk factors include smoking, exposure to chemicals, and chronic cystitis. Treatment primarily involves transurethral resection and intravesical infusion of chemotherapy, but may also include laser ablation, Bacillus Calmette-Guerin bladder treatment, radiation therapy, chemotherapy, or surgical removal of part or all of the bladder. The natural curcuminoid mixture (curcumin, demethoxycurcumin, and bisdemethoxycurcumin) presents activity towards bladder cancer, as it inhibits cell proliferation and migration while promoting apoptosis through the suppression of MMP signaling pathways. Curcumin itself was also found to reduce bladder cancer tumor growth in animal models. One potential approach to increasing biological activity is the introduction of fluorine atoms into a molecule. Based on this principle, our group synthesized a series of fourteen curcumin fluoro-analogs.
These compounds were tested in vitro against the bladder cancer cell lines 5637 and SCaBER. The study showed that the presence of the BF2 group at the diketone fragment is crucial for cytotoxic activity, as compounds with an unaltered 3,5-diketone unit showed significantly lower activity against the bladder cancer cell lines. Additionally, curcumin-BF2 adducts with a methoxyl group demonstrated higher activity compared to those with hydroxyl or fluorine groups, and compounds with a single fluorine atom were more effective than those with two fluorine atoms. It was also found that the distribution of substituents in the benzene ring significantly affects the anticancer activity of curcumin analogs. Among the synthesized BF2 adducts, derivatives with a 3-fluoro-4-methoxyphenyl group proved to be the most cytotoxic. Compound 78 exhibited IC50 values of 6.49 μM and 3.31 μM for the 5637 and SCaBER cell lines, respectively, after a 24-hour incubation, demonstrating superior efficacy compared to curcumin. The introduction of a BF2 moiety to the carbonyl groups was also applied by Lazewski et al. The authors obtained a series of curcumin derivatives by substituting the phenolic groups with poly(ethylene glycol) (PEG) chains and adding a BF2 moiety to the carbonyl groups. The compounds were tested for their cytotoxic activity against two bladder cancer cell lines, 5637 and SCaBER. Cell viability was analyzed under normoxic and hypoxic conditions (1% oxygen). The study showed that, in a concentration-dependent manner, PEGylated curcumin inhibited the cell cycle in the G2/M phase and induced the expression of proteins involved in cell cycle regulation, cell proliferation, and the response to hypoxic conditions. Under hypoxia, but not normoxia, compound 79 increased the expression of stress-related proteins associated with c-Jun N-terminal kinase signaling, angiogenesis, ECM patterning, and the p21 signaling pathway. To improve the pharmacokinetic properties and enhance the biological effects of curcumin, Bakun et al. synthesized and characterized a series of 30 compounds inspired by its structure, which were evaluated against the bladder cancer cell lines 5637 and SCaBER. Compound 80 proved to have the best activity, showing IC50 values of 1.2 μM and 2.2 μM against the 5637 and SCaBER cell lines, respectively, after 24 hours. Analysis of the structure-activity relationship of the most active compounds showed that symmetric curcuminoids exhibited higher anti-tumor activity compared to quasi-curcuminoids. Moreover, modification of curcumin's β-diketone moiety with the BF2 group significantly enhanced its cytotoxic activity.
Summarizing the above-discussed publications, one can draw conclusions about how specific structural modifications concerning the hydroxyl and methoxyl groups or the ß-diketo moiety influence anticancer activity.
– Hydroxyl and methoxyl groups: Modifications in methylation patterns, such as the addition of a third hydroxyl or methoxyl group, as well as complete methylation or demethylation, did not result in enhanced activity against CML-derived K562 leukemic cells. Curcumin-BF2 adducts containing a methoxyl group exhibited greater activity against bladder cancer cells (5637 and SCaBER) than those with hydroxyl or fluorine groups.
– ß-Diketo moiety: Pyrimidinone derivatives demonstrated improved anticancer activity, and the incorporation of bulky N-substitutions should be considered when designing proteasome inhibitors derived from curcumin.
Shortening the alkyl chain did not lead to increased activity against CML-derived K562 leukemic cells. Complexation of the diketone moiety with a BF2 group increased cytotoxicity against the SCaBER and 5637 bladder cancer cell lines.
Curcumin, a natural compound found in turmeric, has demonstrated significant anticancer properties, primarily by modulating cell signaling pathways and inducing apoptosis in various cancer types. However, its clinical application is limited due to poor solubility, low stability, and reduced bioavailability. Recent research has focused on structural modifications of the curcumin molecule, such as altering functional groups or introducing substituents, to enhance its pharmacokinetic properties and improve its anticancer potency and selectivity. Curcumin not only reveals an anti-tumor effect but also reverses the effect of multidrug resistance (MDR) in tumor cells. When used in combination therapy, curcumin can act as a factor that sensitizes neoplastic cells to the action of anticancer drugs, which may result in an increase in their effectiveness. The antitumor activity of curcumin, measured by the IC50 value, most often ranges from 2 to 50 µM, depending on the type of tumor and the cell line used in the study (colon cancer, breast cancer, ovarian cancer, liver cancer, gastric cancer, lung cancer, human esophageal carcinoma, pancreatic cancer, osteosarcoma). The effect of reducing the MDR phenomenon is also observed in the case of curcumin derivatives. Changes in the curcumin structure consist of modification of the side chain on the benzene ring, hydrogenation of the seven-carbon chain, replacement of the β-diketone structure with, e.g., an isoxazole or pyrazole ring, formation of complex compounds, substitution of the methylene-bridge hydrogens, and replacement of the benzene rings with other aromatic heterocyclic rings. Complex modifications include the so-called mixed modifications combining all the previous ones.
3.1. Modifications of Substituents on Benzene Rings or Hydrogenations of Alkene Chains
Curcumin ester or demethoxy derivatives were characterized by better stability, and their more favorable antitumor activity was explained, among other mechanisms, by the induction of rapid double-strand DNA breaks, inhibition of mitosis, downregulation of P-gp, and upregulation of pro-apoptotic signaling (p53/p21 and p16/Rb pathways). The increased cytotoxicity of demethoxycurcumin (82) observed in studies on colon cancer cell lines (HCT 11 cells) is explained by its greater stability compared to curcumin (81) (IC50: 3.3 and 38.2 µM, respectively). Compounds 70, 82, and 83 (bisdemethoxycurcumin) demonstrated efficacy against vincristine-resistant (Kb-v1 cells; IC50: 23.5, 35.8, and 93.0 µM, respectively) and wild-type, sensitive (Kb-3-1 cells; IC50: 24.0, 33.3, and 85.0 µM, respectively) cervical cancer cells. The affinity of curcumin derivatives for aldehyde dehydrogenase-1 (ALDH-1) (86 > 70 > 81 > 83 > 82) and GSK-3β (84 > 81 > 86 > 85 > 82 > 83) was also observed in breast cancer. In contrast, the introduction of four ether groups (87) resulted in cell cycle inhibition in the G2/M phase and apoptosis in chronic myeloid leukemia cells (K562dox, an MDR cell line with high P-gp expression; K562, a CML cell line), with simultaneous activation of caspase 3 and decreased PARP-1 and P-gp levels. For this curcumin derivative, 10-fold greater anti-tumor and anti-P-gp activity was observed than for curcumin. Thus, such modifications have a positive effect on MDR inhibition.
The substitution of a heterocyclic ring in place of the benzene one included the replacement of the phenol ring with a furan ring to increase bioavailability and anti-tumor activity. The results of cytotoxicity studies indicated that such modifications may reverse MDR in a different way, i.e., by lowering the level of the MDR protein Trx in lung cancer cells. Tetrahydrocurcumin (compound 86) is a carbon chain hydrogenation product that is more hydrophilic and less photosensitizing than curcumin, which facilitates its water solubility and delivery to cancer cells and increases its effectiveness as a free radical scavenger. Therefore, THC may also be a potential MDR reversal agent, modulating the three ABC drug transporters ABCB1, ABCG2, and ABCC1 in human cervical cancer. Moreover, THC inhibits caspase-3 activity and the level of Bcl-2-associated X protein (Bax), induces autophagy in human myeloid leukemia cells (Ara-C-resistant HL60), influences CSC suppression and the regulation of apoptosis in esophageal squamous cell carcinoma (TE-1 cells resistant to 5-FU), increases the accumulation of Rh123 and calcein in breast and cervical cancer cells (Kb v-1 and MRP1-HEK293, without affecting Kb 3-1 cells), and increases the concentrations of etoposide, mitoxantrone, and vinblastine in cells. Lai et al. confirmed the chemo-preventive properties of tetrahydrocurcumin in the prophylaxis of colon cancer. Compound 83 showed pro-apoptotic activity through suppression of Wnt-1, β-catenin protein expression, and GSK-3β phosphorylation, and through reduction of the connexin-43 protein level. As a result, inhibition of colon polyp formation was observed by limiting the formation of gap junctions.
3.2. Modifications of Diketone Systems
Modifying the diketone system as a form of molecular stabilization leading to an enhancement of the MDR reversal effect has proved to be technically difficult. However, the introduction of a pyrazole ring at this point resulted in the inhibition rate of MCF-7/HER18 cells and MDA-MB 435/HER2 cells being over 40% higher than that of curcumin. These data suggest that this derivative can reverse the MDR of two types of cell lines by reducing HER2 protein expression and blocking the breast cancer cell cycle at the G2/M stage. Other modifications resulted in the observed ability to induce cell apoptosis by reducing the activation of NF-κB and its anti-apoptotic factors (Bcl-2, Bcl-x, survivin, and XIAP in HA22T/VGH and MCF-7/R cells). Among the 20 newly obtained curcuminoids and their pyrazole-modified analogs synthesized by Pham et al., the curcumin derivative 89a (IC50 = 1.53 μM) revealed the highest antitumor activity (liver cancer cell line HepG2). Curcuminoids with a pyrazole ring (89a–89d) revealed 2–23 times higher antitumor activity compared to their parent structures. The introduction of a fluorine atom as a substituent in the ring significantly weakened the desired activity of the tested compounds or deprived them of it entirely. The authors indicated that hydroxylation of curcumin at position 3 alone increased activity (IC50 = 35.47 µM) compared to curcumin (IC50 = 20.70 µM). Curcuminoids acted as Michael acceptors that reacted with GST and GSH in the cell. In contrast, pyrazole analogs were not susceptible to nucleophilic additions with -SH groups in the detoxification mechanism.
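The potency and selectivity comparisons quoted throughout this review (for example, the 2–23-fold higher activity of the pyrazole analogs relative to their parent structures) reduce to simple IC50 ratios. The minimal Python sketch below illustrates the two ratios most commonly reported, the fold-change in potency versus a reference compound and the selectivity index toward cancer cells; all numerical values are invented placeholders rather than results from the cited studies.

```python
# Illustrative helper functions for the IC50-based comparisons used throughout
# this review. The numeric inputs below are invented placeholders, not data
# from the cited studies.

def potency_fold_change(ic50_reference_uM: float, ic50_analog_uM: float) -> float:
    """Fold improvement in potency of an analog versus a reference compound.

    A value > 1 means the analog inhibits at lower concentrations, i.e., is more potent.
    """
    return ic50_reference_uM / ic50_analog_uM

def selectivity_index(ic50_normal_uM: float, ic50_cancer_uM: float) -> float:
    """Selectivity index (SI): ratio of the IC50 in normal cells to that in cancer cells.

    SI > 1 indicates preferential toxicity towards the cancer cells.
    """
    return ic50_normal_uM / ic50_cancer_uM

if __name__ == "__main__":
    # Hypothetical example: reference compound IC50 = 20.7 uM and analog IC50 = 1.5 uM
    # on the same cancer cell line; analog IC50 = 3.5 uM on a non-cancerous line.
    print(f"Fold-change vs reference: {potency_fold_change(20.7, 1.5):.1f}x")
    print(f"Selectivity index:        {selectivity_index(3.5, 1.5):.1f}")
```

Under these assumed inputs the analog is roughly 14-fold more potent than the reference and about 2.3-fold more toxic to the cancer line than to the normal line, the same style of ratio used, for instance, for compounds 41 and 42 earlier in this section.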
The elimination of the ketone group, together with the shortening of the alkyl chain of curcumin, led to the preparation of compounds 91 and 92, which showed a cytotoxic effect on various human breast cancer cells (MCF-7: ER+, ER−). The obtained IC50 values for 91 and 92 were 2.4 and 1.7 µM, respectively, comparable to curcumin (1–7.5 µM). Similar observations were made by Azzi et al., who found EC50 values for compound 91 to be 2.9 µM and 6.4 µM in studies on the MCF7 and OVCAR-3 cell lines, respectively. Cridge et al. showed that these compounds inhibited Akt, STAT3, and HER2/Neu and activated the process of apoptosis. Their synergy with doxorubicin was also found. The molecular action of curcumin is related to the removal of reactive oxygen species (ROS) responsible for cell damage. The antioxidant function of the curcumin molecule is provided by the diketone residue and the two oxidizable phenolic groups, with the methoxy groups also being necessary for the antioxidant action. A study on D. discoideum showed that compounds 88 and 3 had the highest and fastest antioxidant activity. The effect of curcumin lasted for a long time. The remaining compounds revealed significantly reduced or no antioxidant activity. Likewise, the results for curcumin and its derivatives regarding anti-inflammatory and antiproliferative activity did not relate to their ability to modulate ROS. Cocorocchio et al. showed that the action of curcumin derivatives is related to the regulation of cell activity by direct binding of the psrA protein. The authors observed the loss of this protein in the cases of 81, 82, and 93. The D. discoideum psrA gene encodes the ortholog of the regulatory subunit B56 of mammalian protein phosphatase 2A (PP2A). In D. discoideum, this protein has been shown to regulate cell chemotaxis and differentiation by negatively modulating the function of glycogen synthase kinase 3 (GSK3). Compound 91, which contains two hydroxyl residues, presented better performance than curcumin. The presence of the second hydroxyl group was supposed to increase the binding efficiency of the compound to β-amyloid aggregates. An example of an effective modification of compound 91 is its derivative 94, an N-maleic acid derivative with strong antiproliferative activity on the H441 lung adenocarcinoma cell line (IC50 = 1 µM). The molecular basis of its anti-cancer activity was related to the presence of the second α,β-unsaturated carbonyl functional group. The ability of compound 89 to deregulate the cellular expression of genes and signaling pathways involved in redox processes and glutamate metabolism, leading to increased oxidative stress and cell death, was observed. Compound 95 also exerted an inducing effect on Nrf2, consistent with the activation of the phase II response involved in protecting cells from the cytotoxicity of oxidative stress. To develop a suitable platform for parenteral delivery, the highly hydrophobic molecule 94 was complexed with cyclodextrins and incorporated into liposomes. In vivo studies in rats showed that the tumor volume was reduced to about half of its original size after 20 days of administration of 95 in liposomes. Derivatives that constitute curcumin-Ru complexes (compound 90) inhibit the effect of P-gp, contributing to MDR reversal, which may be of importance in the treatment of ovarian cancer as an alternative to platinum complexes.
The obtained Ru(II)-arene complexes are air-stable in solution and in the solid state and are well soluble in most organic solvents. The antitumor activity of ruthenium(II) aromatic derivatives (p-cymene, benzene, hexamethylbenzene) containing modified curcumin ligands was assessed using five tumor cell lines. The best activities were observed for the breast cancer (MCF7, IC50 9.7 µM), ovarian cancer (A2780, IC50 9.4 µM), and glioblastoma multiforme (U-87, IC50 9.4 µM) cell lines, followed by lung cancer (A549, IC50 13.7 µM) and colon cancer (HCT116, IC50 15.5 µM). The anti-tumor activity of this group of compounds was manifested by their pro-apoptotic activity. This effect was approximately twice as strong as that of the corresponding curcumin complex in breast and ovarian cancer cells. The replacement of the hydroxyl groups of curcumin with methoxyl groups in the complexes resulted in the observed increase in antitumor activity. Their increased cytotoxicity to cancer cells correlates with the increased lipophilicity of the curcuminoid. These compounds do not contain hydroxyl groups, which are responsible for the antioxidant properties of the ligands. Therefore, antioxidant properties were not observed for the described Ru(II) complexes. Caruso et al. showed that, with only one exception (ligand CurcII), chemical substitution on the backbone of curcumin seems to produce a biologically more effective molecule, which presents a lower IC50 than the corresponding Ru(II) complexes.
3.3. Modifications of a Methylene Group
Among the curcumin derivatives obtained by modification of the active methylene group, two deserve attention. Compound 95 (CDF) inactivated miR-21, which led to the reactivation of the tumor suppressor PTEN and of miR-200b and miR-200c, and to the inactivation of the phosphorylated material. Moreover, in colon cancer, CDF inhibited the MDR of 5-FU- and oxaliplatin-resistant colon cancer cells by triggering the miR-21/PTEN/Akt pathway. In the cells of the chemo-resistant human colon cancer cell line HCT116CR, CDF increased the expression of miR-34a and miR-34c, which may also inhibit the proliferation of human prostate cancer cells. In contrast, the replacement of the methylene group with a methyleneoxy group (compound 96) resulted in an increase in the activity of the 38 kDa protein kinase (p38), a decrease in the activity of c-Jun N-terminal kinase (JNK) and extracellular signal-regulated kinase (ERK), and inhibition of the effect of P-gp on MDR without affecting the expression level of this protein in gastric, lung, and liver cancer. The introduction of a 3-methoxy-4-hydroxybenzyl substituent (compound 97) into the methylene bridge led to a compound with antiproliferative activity against KM12 and SW480 colon cancer cells. This effect was dose- and time-dependent and was likely due to inhibition of NF-κB activity by inhibiting IκBα phosphorylation. The observed cell growth suppression was approximately five to seven times greater than the effect of curcumin. In turn, a strong anti-tumor effect was observed when testing compound 98, equipped with an N-substituted piperidine ring. The compound showed antiproliferative activity against breast cancer cells (MCF-7; IC50 1.5 µM). In the context of its multidirectional action, compound 99 constitutes an example of a curcumin analog originally designed as a metalloproteinase inhibitor with anti-inflammatory properties.
However, this compound, in combination with SBT-1214 (a new-generation taxoid), contributed significantly to the death of highly metastatic prostate and colon cancer stem cells (CSCs). This effect was much stronger than when the compounds were used separately.
3.4. Mixed Modifications and Hybrids
The introduction of a 4-ketopiperidine ring instead of the diketone moiety (compounds 100 and 101) resulted in the preparation of derivatives with anti-tumor activity. Their cytotoxicity was confirmed for lung adenocarcinoma cells (ALK+ H3122) in the sub-micromolar range. These derivatives, compared to crizotinib, presented little or no direct inhibitory effect on ALK. Thus, since the curcumin derivatives 100 and 101 and crizotinib acted independently, combination therapy might be an effective lung cancer treatment strategy. Compound 101 (cell line H3122, IC50 = 0.7 µM) proved to be particularly promising. Compound 102 (cell line H3122, IC50 = 1.1 µM) also proved to be effective and non-toxic in breast cancer xenograft models, which justifies the interest in this type of modification of the curcumin molecule. The curcumin analog 102 showed increased ROS production and decreased oxygen consumption in HCT-116 colon cancer cells. Derivatives of compound 103 also exhibited antiproliferative activity against the HCT116 and HT29 cell lines by interfering with mitochondrial function, achieving IC50 values ranging from sub-micromolar to nanomolar. It was shown that it was the amide carbonyl groups that significantly contributed to the cytotoxic activity of these derivatives. The curcumin analog 103, containing the inden-2-one ring, was active against prostate cancer cells, BxPC-3 pancreatic cancer cells, HT-29 colon cancer cells, H1299 lung cancer cells, and non-cancerous human prostate epithelial cells (RWPE-1). Its cytotoxic and antiproliferative activity against all cell lines was 20 times stronger than that of curcumin. The inden-2-one derivative (compound 101) was effective in an anti-tumor study using prostate cancer cells (PC-3; IC50 0.64 µM). In the same study, curcumin reached an IC50 of 19.98 µM. The IC50 value of compound 102 in RWPE-1 cells was higher than in PC-3 cells, indicating that compound 101 is more toxic to cancer cells than to non-cancer cells. Initial in vitro studies of novel boron derivatives (105a–c) that resulted from the replacement of the aromatic ring with an ortho-carborane cage showed significant cytotoxic activity, with EC50 values ranging from 1.8 to 5.5 µM. The studies were performed with the use of the MCF7 and OVCAR-3 (human ovarian adenocarcinoma) cell lines. These compounds additionally inhibited the formation of β-amyloid aggregates seen in Alzheimer's disease. Another approach in the synthesis of curcumin derivatives is the preparation of curcumin-resveratrol hybrids (106–108). In studies on three tumor cell lines, MCF-7 (breast), A549 (lung), and HepG2 (liver), their cytotoxic effect was confirmed by observing lower IC50 values on MCF-7 cells (IC50: 40.49 µM for 106, 19.09 µM for 107, and 42.99 µM for 108) compared to the effects of curcumin (IC50 68.25 µM), resveratrol (IC50 128.85 µM), or combined administration of these compounds (IC50 61.71 µM). In the treated cells, the authors observed a decrease in the G0/G1 and S populations and an increase in the G2/M population.
A significant increase in CDKN1A mRNA was also noted in the samples treated with 104 compared to the samples treated with the combined use of resveratrol and curcumin at the same concentration. Thus, the effect of the hybrid in promoting p21 regulation in MCF-7 cells is more potent compared to the combined use of resveratrol and curcumin. The p21 protein belongs to the Cip/Kip family of proteins that promote cell cycle arrest by binding to cyclin-dependent kinases (CDKs). The significant reduction in mRNA abundance for the three mitotic kinases (aurora A, aurora B, and PLK1) in the 107 -treated cultures compared to the control group was less than when compared to the combined use of curcumin and resveratrol but sufficient to inhibit mitosis . Curcumin ester or demethoxy derivatives were characterized by better stability, and their more favorable antitumor activity was explained, among others, by induction of rapid double-strand breaks of DNA, inhibition of mitosis, and downregulation of P-gp and upregulation of pro-apoptotic signaling (p53/p21 and p16/Rb pathways) . The increase in cytotoxicity of demethoxy curcumin ( 82 ) observed in studies on colon cancer cell lines (HCT 11 cell) is explained by its greater stability compared to curcumin ( 81 ) (IC 50 : 3.3, 38.2 µM) . Both 70 , 82 , and 83 (bisdemetoxycurcumin) demonstrated efficacy against vincristine-resistant (Kb-v1 cell; IC 50 : 23.5, 35.8, 93.0µM, respectively) and wild-type (Kb-3-1 cell; IC 50 : 24.0, 33.3, 85.0 µM, respectively) sensitive cervical cancer cells . The affinity of curcumin derivatives for aldehyde dehydrogenase-1 (ALDH-1) ( 86 > 70 > 81 > 83 > 82 ) and GSK-3β ( 84 > 81 > 86 > 85 > 82 > 83 ) was also observed in breast cancer . In contrast, the introduction of 4 ether groups ( 87 , ) resulted in cell cycle inhibition in the G2/M phase and apoptosis in chronic lymphocytic leukemia cells (K562dox, MDR cell line with high P-gp expression; K562, CML cell line) and simultaneous activation caspase 3 and decreased parp-1 and P-gp levels. For this curcumin derivative, a 10-fold greater anti-tumor and anti-p-gp activity was observed than for curcumin. Thus, such modifications have a positive effect on MDR inhibition . The substitution of the heterocyclic ring in place of benzene one included the replacement of phenol with a furan ring to increase bioavailability and anti-tumor activity. The results of cytotoxicity studies indicated that such modifications may reverse MDR in a different way, i.e., by lowering the level of the MDR protein Trx in lung cancer cells . Tetrahydrocurcumin (compound 86 , ) is a carbon chain hydrogenation product and is more hydrophilic and less photosensitizing than curcumin, which facilitates its water solubility, delivery to cancer cells and increases its effectiveness as a free radical scavenger. Therefore, THC may also be a potential MDR reversal agent with the function of modulating the three-drug transporters ABC: ABCB1, ABCG2, and ABCC1 in human cervical cancer. Moreover, THC inhibits caspase-3 activity and levels of protein X associated with B-cell lymphoma 2, induces autophagy in human myeloid leukemia (Ara-C-resistant HL60 cell), influences CSCs suppression and regulation of apoptosis in esophageal squamous cell carcinoma (TE-1 cells resistant to 5-FU), increases the accumulation of Rh123 and calcein in breast and cervical cancer cells (Kb v-1 and MRP1-HEK293 without affecting Kb 3-1 cells) and increases the concentrations of etoposide, mitoxantrone, and vinblastine in cells . Lai et al. 
confirmed the chemo-preventive properties of tetrahydrocurcumin in the prophylaxis of colon cancer. Compound 83 showed pro-apoptotic activity through suppression of Wnt-1, expression of the β-catenin protein, GSK-3β phosphorylation, and reduction of the connexin-43 protein level. As a result, inhibition of colon polyp formation was observed by limiting the formation of gap junctions . Modifying the diketone system as a form of molecular stabilization leading to an enhancement of the MDR inversion effect has proved to be technically difficult. However, the introduction of a pyrazole ring at this point resulted in the inhibition rate of MCF-7/HER18 cells and MDA-MB 435/HER2 cells being over 40% higher than that of curcumin. These data suggest that this derivative can reverse the MDR of two types of cell lines by reducing HER2 protein expression and blocking the breast cancer cell cycle at the G2/M stage. Other modifications resulted in the observed ability to induce cell apoptosis by reducing the activation of NF-κB and its anti-apoptotic factors (Bcl-2, Bcl-x, survivin, and XIAP in HA22T/VGH and MCF-7/R cells) . Among the 20 newly obtained curcuminoids and their pyrazole-modified analogs, synthesized by Pham et al., curcumin derivative ( 89a , IC 50 = 1.53 μM) revealed the highest antitumor activity (liver cancer cell line HepG2). Curcuminoids with a pyrazole ring ( 89a – 89d , ) revealed 2–23 times higher antitumor activity compared to their parent structures. The introduction of a fluorine atom as a substituent in the ring significantly weakened or deprived the tested compounds of the desired activity. The authors indicated that hydroxylation of curcumin at position 3 alone increased in activity (IC 50 = 35.47 µM) compared to curcumin (IC 50 = 20.70 µM). Curcuminoids acted as Michael acceptors that reacted with GST and GSH in the cell. In contrast, pyrazole analogs were not susceptible to nucleophilic additions with -SH groups in the detoxification mechanism. The elimination of the ketone group, together with the shortening of the alkyl chain of curcumin, led to the preparation of compounds 91 and 92 , which showed a cytotoxic effect on various human breast cancer cells (MCF-7: ER+, ER−). The obtained IC 50 values for 91 and 92 were 2.4 and 1.7 µM and could be compared to curcumin (1–7.5 µM) . Similar observations were made by Azzi et al., who found EC 50 values for compound 91 to be 2.9 µM and 6.4 µM in studies on MCF7 and OVCAR-3 cell lines, respectively . Cridge et al. showed that these compounds inhibited Akt, STAT3, and HER2/Neu and activated the process of apoptosis. Their synergy with doxorubicin was also found . The molecular action of curcumin is related to the removal of reactive oxygen species (ROS) responsible for cell damage. The antioxidant function is given to the curcumin molecule by a diketone residue and two oxidizable phenolic groups and methoxy groups as necessary for the antioxidant action. The study on D. discoideum showed the highest and the fastest antioxidant activity of compounds 88 and 3 . The effect of curcumin lasted for a long time. The remaining compounds revealed significantly reduced or no antioxidant activity. Likewise, the results for curcumin and its derivatives regarding anti-inflammatory activity and antiproliferative disorders did not relate to their ability to modulate ROS. Cocorocchio et al. showed that the action of curcumin derivatives is related to the regulation of cell activity by direct binding of the psrA protein. 
The authors observed the loss of this protein in the cases of 81 , 82 , and 93 . The gene of D. discoideum psrA encodes the ortholog of the regulatory subunit B56 of mammalian protein phosphatase 2A (PP2A). In D. discoideum , this protein has been shown to regulate cell chemotaxis and differentiation by negatively modulating the function of glycogen synthase kinase 3 (GSK3) . Compound 91 , which contains two hydroxyl residues, presented better performance than curcumin. The presence of the second hydroxyl group was supposed to increase the binding efficiency of the compound with β-amyloid aggregates. An example of an effective modification of compound 91 is its derivative 94 —the N -maleic acid derivative with strong antiproliferative activity on the H441 lung adenocarcinoma cell line (IC 50 = 1 µM) . The molecular basis of its anti-cancer activity was related to the presence of the second α, β-unsaturated carbonyl functional group. The ability of compound 89 to deregulate cellular expression of genes and signaling pathways involved in redox processes and glutamate metabolism, leading to increased oxidative stress and cell death, was observed. Compound 95 also exerted an inducing effect on Nrf2, consistent with the activation of the phase II response involved in the protection of cells from the cytotoxicity of oxidative stress. To develop a suitable platform for parenteral delivery, the highly hydrophobic molecule 94 was complexed with cyclodextrins and incorporated into liposomes. In vivo studies in rats showed that the tumor volume was reduced to about half of its original size after 20 days of administration of 95 in liposomes . Derivatives, which constitute curcumin-Ru complexes of the structure shown in (compound 90 ), inhibit the effect of P-gp on MDR reversal, which may be of importance in the treatment of ovarian cancer as an alternative to platinum complexes .
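Many of the potency comparisons in this section are expressed as fold-differences relative to curcumin. As a quick illustration of how such factors follow from the reported IC 50 values (a lower IC 50 means higher potency), the short sketch below recomputes two of the value pairs quoted above; it is illustrative arithmetic only and not part of the original studies.

```python
# Fold-difference in potency relative to curcumin, from IC50 values (µM)
# quoted in the text above; a lower IC50 means a more potent compound.
ic50_um = {
    "curcumin, PC-3": 19.98,
    "inden-2-one derivative, PC-3": 0.64,
    "curcumin, MCF-7": 68.25,
    "curcumin-resveratrol hybrid 107, MCF-7": 19.09,
}

fold_pc3 = ic50_um["curcumin, PC-3"] / ic50_um["inden-2-one derivative, PC-3"]
fold_mcf7 = ic50_um["curcumin, MCF-7"] / ic50_um["curcumin-resveratrol hybrid 107, MCF-7"]

print(f"PC-3:  ~{fold_pc3:.0f}-fold more potent than curcumin")   # ~31-fold
print(f"MCF-7: ~{fold_mcf7:.1f}-fold more potent than curcumin")  # ~3.6-fold
```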
The curcuminoid structure has become the starting point for the development of compounds with a wide range of activities: antitumor, anti-inflammatory, compounds effective in the treatment of neurodegenerative diseases, etc. Literature data confirm the growing interest in this polyphenol and indicate its numerous benefits. Thus, sensible structural modifications, combined with innovative technological strategies, will overcome some of curcumin’s limitations, such as poor stability and low solubility in physiological conditions that preclude its clinical application . Curcumin reveals antiproliferative activity in many cancers, inhibits transcription factors, modulating the activity of growth factor receptors and cell adhesion molecules involved in angiogenesis, tumor growth, and metastasis. There is also the possibility of its influence on the inhibition of telomerase.
Meanwhile, curcuminoids show a bifunctional effect by blocking the anti-apoptotic signaling of NF-κB but also by blocking the anti-oncogenic effect of STAT-1 and the production of interferon-γ. In contrast, Ga-curcuminoid complexes showed potential for use as radiotracers in the detection of lung cancer . Curcumin derivatives and analogs also constitute a widely studied group of compounds with anti-inflammatory properties. The most important molecular target of their action is NF-κB responsible for the regulation of the immune and inflammatory responses. Some derivatives revealed both anti-proliferative and anti-inflammatory effects. Chainoglou et al. considered the basic features linking the activity of this group of compounds with the molecular target or signaling pathway: the presence of heterocyclic aromatic rings (thienyl, pyrazolyl, pyrimidinyl), hydroxyl and methoxy groups, removal of substituents, the introduction of lipophilic substituents, bromines, diaryl pentanoid rings or carbon linkers to increase antiproliferative activity. Hybridization also appears to have an impact on these properties . Curcumin not only has an anti-tumor effect but also reverses the effect of MDR in tumor cells. In combination therapy, curcumin, in combination with chemotherapeutic agents, can act as a factor that sensitizes neoplastic cells to the action of anticancer drugs, which may result in an increase in their effectiveness. The antitumor activity of curcumin measured by the IC 50 value, depending on the type of tumor and the cell line used in the study, most often ranges from 2–50 µM (colon cancer, breast cancer, ovarian cancer, liver cancer, gastric cancer, lung cancer, human esophageal carcinoma, pancreatic cancer, osteosarcoma) . The observed effect of reversing the MDR phenomenon in the presence of curcuminoids makes it possible to search for new curcumin derivatives and to create combinations with other chemotherapeutic agents. Changes in the curcumin structure consist of the modification of the side chain on the benzene ring, hydrogenation of the seven-carbon chain, replacement of the β-diketone structure with the heterocyclic ring, obtaining complexes, hydrogen substitution of methylene bridge hydrogen, replacement of benzene rings with other aromatic heterocyclic rings. More advanced modifications include the so-called mixed modifications combining all the previous ones . Knowing that curcumin itself exhibits anticancer activity and may additionally help overcome the multidrug resistance of cancer cells by inhibition of the P-gp, Lopes-Rodrigues et al. synthesized a series of curcumin derivatives. First, they were first assessed for their anticancer potential towards K562 (chronic myeloid leukemia) and NCI-H460 cells (non-small cell lung cancer), and their multidrug-resistant analogs overexpressing P-gp, K562Dox, and RH460, respectively. The best activity in this manner, as compared to curcumin, was expressed by 109 and 91 . Interestingly, 109 was more potent in the drug-resistant cells. It is also important to be aware of the interesting and important preventive, especially antioxidant potential of these compounds. This is because ROS are formed by oxidation reactions in organisms. Biochemical imbalances caused by ROS can damage many biological macromolecules, including DNA, RNA, lipids, and proteins, leading to degenerative diseases such as multiple sclerosis, cancer, and Alzheimer’s disease. 
Curcuminoids exhibit strong antioxidant activity, surpassing α-tocopherol, a well-known natural antioxidant. This potency is attributed to its ability to neutralize harmful ROS, such as superoxide anions and hydroxyl radicals. Curcumin can protect cells from DNA damage caused by lipid peroxides and singlet oxygen and may also impact neurodegenerative diseases and atherosclerosis. Its neuroprotective effects stem from its antioxidant capabilities, as neurodegeneration often results from oxidative damage by ROS and reactive nitrogen species (RNS). Elevated protein oxidation and oxidative DNA damage are common in neurodegenerative conditions. Despite its benefits, curcumin’s poor bioavailability hampers its medicinal use, leading scientists to develop analogs to enhance its antioxidant properties . The antioxidant activities of many curcumin derivatives, like 109 and 110 – 113 , have also been intensively studied . The difficulty in determining the optimal modifications of curcumin lies in its greatest benefit— curcumin does not exert its activity through a single cellular pathway, but it has the ability to simultaneously activate or deactivate several targets. Thus, the observed effects for a given derivative may be a result of tuning the structure to be more specific to a particular target rather than slightly potentiating all pathways. Additionally, curcumin has three motifs that may be responsible for its activity—the β-diketone, substituted aromatic rings, and Michael acceptor fragment, with the latter capable of blurring the picture due to non-specific action. Thus, caution should be taken when interpreting the results of the biological activity of curcumin derivatives. To sum up, by functionalizing the parent curcumin molecule, researchers have obtained more stable and bioavailable compounds with enhanced therapeutic potential, making curcumin derivatives promising candidates for medical applications, including cancer. Apart from the anticancer activity of new curcuminoid derivatives, it is worth paying attention to their high protective potential against free radicals and the possibility of use in combined therapies, especially in cases of multidrug resistance. |
Efficacy of compound betamethasone combined with ropivacaine in iliac fascial space nerve block analgesia | a79e074d-338d-461d-9f65-171379c52f48 | 11814100 | Surgical Procedures, Operative[mh] | Artificial femoral head replacement (AFHR) is primarily indicated for the treatment of femoral head fractures, necrosis, and femoral neck fractures , . The majority of patients undergoing AFHR are elderly and often present with multiple comorbidities, which renders them less tolerant to the risks associated with surgery and analgesia. Moreover, the pain resulting from surgical trauma can trigger a cascade of stress responses in the body, significantly heightening the risk of perioperative complications and delaying recovery , . Iliac fascial space nerve block analgesia (IFNBA) is frequently utilized during AFHR, often in conjunction with subarachnoid analgesia. Ropivacaine, a long-acting amide local anesthetic, is the most commonly used agent in IFNBA due to its superior ability to selectively block sensory nerve fibers over motor fibers. It achieves anesthetic effects by inhibiting sodium ion channels in nerve cells, leading to effective nerve conduction blockade, while presenting lower toxicity to the central nervous system and myocardium, thus ensuring greater safety and controllability – . The use of adjunctive medications has been shown to enhance the efficacy of local anesthetics, accelerate the onset of sensory and motor blocks, and improve analgesic outcomes without significant adverse reactions. Currently, the selection of adjuvant agents for local anesthetics remains a prominent area of research, with dexamethasone being the most commonly employed corticosteroid adjunct in IFNBA – . Compound betamethasone represents a promising alternative as a local anesthetic adjuvant, characterized by its prolonged duration of action, extending beyond four weeks. Its local application effectively reduces inflammatory exudation and enhances local blood circulation, thereby alleviating acute postoperative pain when combined with local anesthetics , . While compound betamethasone has been primarily studied in brachial plexus blocks, paravertebral nerve blocks, and transverse abdominal plane blocks – , its application in lower limb nerve blocks remains underexplored. This study aims to evaluate the anesthetic efficacy of ropivacaine combined with compound betamethasone in IFNBA for patients undergoing AFHR, providing a theoretical foundation for the selection of clinical anesthetic agents. Subjects From January 2022 to June 2022, 70 patients requiring IFNBA analgesia undergoing AFHR surgery at our hospital were included in this prospective study. Using Excel randomization, 70 patients were divided into study group ( n = 35) and control group ( n = 35). The study group received ropivacaine combined with compound betamethasone analgesia regimen, while the control group only received ropivacaine analgesia regimen (Fig. ). This study protocol was formulated in accordance with the requirements of the Declaration of Helsinki of the World Medical Association. It was approved by the Ethics Committee of Chengde Central Hospital (NO. CDCHLL2022-401), and the informed consent forms were obtained from all patients. This study was previously registered at Chinese Clinical Trial Registry (No. ChiCTR2100052214, Date: 22/10/2021). Inclusion and exclusion criteria Inclusion criteria Patients undergoing artificial femoral head replacement (AFHR) surgery. Patients aged over 65 years. 
Patients classified as American Society of Anesthesiologists (ASA) grades I to III. Patients with a body mass index (BMI) ranging from 19.1 to 28 kg/m². Exclusion criteria Patients with a known allergy to ropivacaine or compound betamethasone. Patients diagnosed with peripheral neuropathy. Patients exhibiting severe dysfunction of the heart, lungs, liver, or kidneys. Patients with inflammation or infection at the injection site. Patients who have previously received opioid analgesics. Patients with a history of mental illness. Patients with malignant tumors. Patients sustaining multiple traumatic injuries. Patients with an inguinal hernia. Patients requesting a postoperative analgesia pump. Patients with diabetes. Treatment protocol All patients received subarachnoid anesthesia combined with ultrasound-guided IFNBA. Upon admission to the operating room, venous access was routinely established; baseline indicators including heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), oxygen saturation (SpO 2 ), and electrocardiogram were monitored; and oxygen was administered with a mask. The iliofascial nerve block was carried out using the P300 model produced by Siemens. The ultrasonic probe was placed at the inguinal ligament, and the femoral artery of the patient was scanned by ultrasound to carefully identify the iliac fascia, deep circumflex iliac artery, iliopsoas muscle, femoral nerve and other tissue structures (Fig. ). The nerve puncture needle was inserted into the skin at an angle of 30°, and drugs were injected after the puncture needle was confirmed by ultrasound to have reached the iliofascial space. The study group received 30 ml of mixed anesthetic (including 7 mg of betamethasone, with the remainder being 0.4% ropivacaine), while the control group received only 30 ml of 0.4% ropivacaine. For ilioinguinal nerve block, the two groups were injected with the corresponding local anesthetics respectively, followed by subarachnoid analgesia. Both groups received 2 ml of low-density 0.375% bupivacaine, and the analgesia level was controlled at T10. The duration of postoperative analgesia and the dosage of analgesic drugs were recorded (oxycodone and flurbiprofen ester were used as analgesic drugs in both groups, including 1 mg/kg of oxycodone during the operation, 1 mg/kg of oxycodone after the operation, and 200 mg of flurbiprofen ester diluted in normal saline to 100 ml at an infusion rate of 2 ml/h). Outcome indicators Blind method: The randomization in this study was based on random numbers generated by the randomization sequence in SPSS 22.0 software. Personnel involved in this randomization process did not participate in patient recruitment or drug administration. An anesthesiologist who was not involved in the study was selected from the department to prepare the study drugs in syringes of the same specification according to the randomization groups. Blinding was implemented for patients, surgeons, anesthesiologists participating in the study, follow-up personnel, and data analysts. Primary outcome measures: The VAS pain scores (Fig. ) at rest and during movement were observed and recorded at 6 h, 12 h, 24 h, and 48 h postoperatively. The Ramsay sedation scores (Table ) were recorded before the block and at 6 h, 12 h, 24 h, and 48 h postoperatively in both groups. Secondary outcome measures: The duration of analgesia, patient satisfaction, and changes in inflammatory factor levels before and after surgery in both groups were recorded and compared.
Indicators of inflammatory factor levels, including tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), were measured and collected before surgery, at the completion of surgery, and 24 h and 72 h after surgery. Blood was drawn via the cubital vein. A 5 ml sample of venous blood was placed in a sterile non-anticoagulant tube and then centrifuged at 3000 rpm for 10 min to collect the supernatant. The levels of TNF-α and IL-6 were determined with an enzyme-linked immunoassay kit. The duration of analgesia refers to the time between the end of anesthetic drug injection and the time when the patient feels significant pain in the surgical incision after surgery. Evaluation of postoperative analgesia satisfaction: Patient satisfaction was assessed by the study team according to the Houston Pain Outcome Instrument (HPOI). The HPOI mainly includes eight dimensions: pain expectation, pain experience, pain’s impact on emotions, pain’s impact on the body or daily life, satisfaction with pain control or relief methods, satisfaction with pain control education, satisfaction with pain care, and overall satisfaction with pain control. The score for each dimension is 1–10, with a total score of 80. The lower the score, the worse the patient satisfaction. A total score of ≥ 72 is satisfactory, 48–72 is generally satisfactory, and < 48 is unsatisfactory. Satisfaction = (total number of cases − number of dissatisfied cases)/total number of cases × 100%. Long-term follow-up index: Patients in both groups were followed up for 3 months after surgery, and adverse reactions such as puncture site infection, superficial surgical site infection, deep surgical site infection, lower extremity venous embolism, pulmonary embolism, nausea and vomiting, and delayed wound healing were recorded in both groups. Sample size calculation Sample size determination was conducted prior to the study using G*Power software (version 3.1.9.4). We aimed to achieve a significance level (α) of 0.05 (two-tailed) and a power (1-β) of 0.90 to detect a clinically significant difference in the visual analogue scale (VAS) pain scores between the two groups. Based on previous studies and pilot data, we estimated an effect size of 0.8 for the difference in VAS scores. Utilizing these parameters, the calculation indicated that a total of 70 patients, with 35 patients in each group, would provide adequate power to detect the anticipated difference. This sample size was deemed sufficient to ensure the robustness of our statistical analyses. Statistical analysis Data collected in this study were analyzed using SPSS version 22.0 software. The normality of continuous variables was assessed using the Shapiro-Wilk test, in conjunction with graphical representations such as histograms and Q-Q plots. Normally distributed measurement data are presented as mean ± standard deviation (SD), while non-normally distributed data are expressed as median (interquartile range). Comparisons between groups were conducted using the Student's t-test for normally distributed data and the Mann-Whitney U test for non-parametric distributions. Categorical data are reported as n , with differences between the two groups analyzed using chi-square tests or Fisher's exact test, as appropriate. A two-sided significance level of 0.05 was established for all statistical tests.
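The reported group size can be cross-checked from the stated parameters (independent two-group comparison, two-tailed α = 0.05, power = 0.90, effect size d = 0.8). The original calculation was performed in G*Power; the snippet below is only a minimal re-check of that calculation and assumes the Python package statsmodels is available.

```python
# Cross-check of the a priori sample-size calculation described above:
# independent two-sample t-test, d = 0.8, two-sided alpha = 0.05, power = 0.90.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8, alpha=0.05, power=0.90, alternative="two-sided"
)
print(round(n_per_group))  # ~34 per group, consistent with the 35 per group enrolled
```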
Baseline clinical characteristics A total of 70 patients underwent AFHR surgery, with the study group comprising 35 patients (mean age: 64.36 ± 9.08 years; 21 males and 14 females) and the control group comprising 35 patients (mean age: 64.13 ± 9.21 years; 20 males and 15 females). There were no statistically significant differences between the two groups of patients in terms of gender, age, BMI, ASA grades, preoperative static VAS score, preoperative dynamic VAS score, preoperative Ramsay sedation score, preoperative TNF-α levels, and preoperative IL-6 levels ( P > 0.05) (Table ). Comparison of postoperative VAS pain scores At 12 and 24 h postoperatively, the static and dynamic VAS pain scores in the study group were significantly lower than those in the control group ( P < 0.001). However, there were no significant differences in static and dynamic VAS pain scores between the two groups at 6 and 48 h postoperatively ( P > 0.05) (Table ). Comparison of postoperative Ramsay sedation scores At 6 and 12 h after surgery, the Ramsay sedation scores in the study group were significantly higher than those in the control group ( P < 0.001). No significant differences in Ramsay sedation scores were found between the two groups at 24 and 48 h postoperatively ( P > 0.05) (Table ). Comparison of postoperative inflammatory factor levels The levels of TNF-α and IL-6 in the study group were significantly lower than those in the control group at the completion of surgery, as well as at 24 and 72 h after surgery ( P < 0.001) (Table ). Comparison of duration of postoperative analgesia and patient satisfaction The duration of postoperative analgesia and patient satisfaction in the study group were both significantly greater than those in the control group ( P < 0.05) (Table ). Comparison of postoperative adverse reactions Patients in both groups were followed up for 3 months after surgery, and no puncture site infection, lower extremity artery embolism, pulmonary embolism or deep infection occurred.
In the study group, superficial surgical site infection occurred in 1 case (2.86%), delayed wound healing in 3 cases (8.57%), and nausea and vomiting in 2 cases (5.71%). In the control group, there were 2 cases of superficial infection (5.71%), 2 cases of delayed wound healing (5.71%), and 3 cases of nausea and vomiting (8.57%). There was no statistical significance in the incidence of the above adverse reactions between groups ( P > 0.05) (Table ). For elderly patients with femoral head fractures, healing can be challenging, and functional loss is common. Artificial femoral head replacement (AFHR) is an effective surgical treatment for such fractures. However, postoperative pain can be severe, and the short duration of single-injection subarachnoid analgesia and peripheral nerve block analgesia impedes timely early functional exercise, thereby delaying recovery. To address this, we investigated the analgesic efficacy of combining subarachnoid analgesia with ultrasound-guided iliac fascia nerve block (IFNBA) for AFHR surgery – . Our findings demonstrate that IFNBA is not only straightforward to administer but also offers a high success rate in pain relief, significantly improving both acute and chronic postoperative pain while minimizing adverse effects. Previous studies have reported positive outcomes with IFNBA in other types of surgery.
For instance, Wang utilized ultrasound-guided IFNBA in patients with intertrochanteric femur fractures, showing more accurate anesthetic effects and improved hemodynamic stability compared to traditional methods . Another study found that combining ultrasound-guided high IFNBA with sacral plexus block in elderly patients with hip fractures led to better analgesia and reduced narcotic use, with minimal cognitive impact . Despite its benefits, local anesthesia can have adverse reactions, such as respiratory depression, nausea, vomiting, and urinary retention. Therefore, extending the analgesic effects of local anesthetics is a growing concern. Ropivacaine is preferred for peripheral nerve blocks due to its high safety profile and effectiveness , . Betamethasone, identified as a promising adjuvant through literature review, has been primarily studied in brachial plexus, paravertebral, and transverse abdominal muscle plane blocks , – . In the present study, the anesthetic effect of ropivacaine combined with compound betamethasone in iliac fascial block was more durable, the analgesic effect was better, and the inflammatory response of patients was also attenuated to a certain extent. The main components of compound betamethasone are betamethasone sodium phosphate and betamethasone dipropionate. After betamethasone enters the ropivacaine solution, part of it is present in pellet form, and these steroid pellets are thought to act as a local reserve that gradually decays and releases steroid, leading to a more lasting effect . Other studies have shown that the combination of compound betamethasone and local anesthetics increases the duration of analgesic action by prolonging the half-life of ropivacaine. Betamethasone binds readily to extracellular receptors, thus changing the molecular arrangement on the surface of the cell membrane, resulting in membrane obstruction and blocking the entry and exit of certain substrates, metabolites and water, which extends the half-life of ropivacaine in the human body to a certain extent . The study results of Zhang et al. showed that when compound betamethasone was added to ropivacaine for analgesia after knee joint replacement, the VAS scores for static and dynamic pain at 12 h and 24 h after surgery were significantly lower than those in the control group, the number of patients in the study group requiring flurbiprofen axetil for analgesia was significantly lower than that in the control group, and satisfaction with analgesia in the study group was higher . The results of the present study were basically consistent with these findings. Compound betamethasone plus ropivacaine applied to IFNBA resulted in significantly lower static and dynamic VAS pain scores at 12 and 24 h after surgery than those of the control group, and the duration of postoperative analgesia was significantly longer than that in the control group. It can be seen that compound betamethasone combined with ropivacaine in peripheral nerve block can indeed prolong the duration of analgesia and improve the analgesic effect. In addition, the study group had a higher satisfaction rate with analgesia, at 80%, indicating that the addition of compound betamethasone could effectively improve the postoperative quality of life of patients. Cytokines such as IL-6 and TNF are linked to immune responses, acute inflammation, and chronic inflammation. Compound betamethasone, a long-acting corticosteroid with potent glucocorticoid and mild mineralocorticoid activity, has substantial anti-inflammatory effects , , .
Our study showed significantly lower levels of TNF-α and IL-6 in the study group compared to the control group at surgery completion, and 24 and 72 h after surgery. This reduction may result from compound betamethasone’s ability to decrease capillary permeability and inhibit IL and TNF secretion, thus mitigating inflammation and tissue damage. Additionally, Ramsay sedation scores at 6 and 12 h after surgery were significantly higher in the study group, likely due to the extended anesthetic effects from the combined drug regimen. This study has limitations, including a small sample size that affects result generalizability and the absence of dexamethasone combined with ropivacaine as a control, preventing direct comparison of betamethasone’s superiority. Future research with larger sample sizes is needed to further explore and confirm the clinical advantages of combining ropivacaine with compound betamethasone in IFNBA. For patients undergoing AFHR, the combination of ropivacaine and compound betamethasone in IFNBA offers superior analgesia, higher patient satisfaction, and effective reduction of inflammatory status, warranting its clinical promotion and application. |
Expression of integrin α | ae5b1761-9b09-459c-bd0f-2d37e0016a8b | 11497997 | Anatomy[mh] | Background Medullary thyroid carcinoma (MTC) is a neuroendocrine tumor, derived from the calcitonin-producing parafollicular c-cells of the thyroid. Although MTC accounts for only 1–2% of thyroid carcinomas, it is responsible for 13% of thyroid cancer-related deaths . In 75% of cases, MTC occurs sporadically, while it can also occur as part of the hereditary tumor syndrome Multiple Endocrine Neoplasia type 2 (MEN2) . Treatment with curative intent consists of total thyroidectomy and dissection of the central lymph node compartment. However, despite treatment, over half of patients continue to exhibit elevated calcitonin levels, indicating persistent disease. Conventional imaging modalities are often not sufficient to detect disease in these patients with low tumor marker levels. Moreover, possibilities for adjuvant therapy are limited. Consequently, survival rates have not increased significantly in the last decades . Therefore, there is a demand for new imaging and therapeutic options that also target lymph node metastases, which will enable better treatment of patients who present with metastases or rapidly progress. Neuroendocrine tumors are highly vascularized and angiogenesis plays a major role in the development of thyroid tumors. Most current adjuvant treatments, such as tyrosine kinase inhibitors, target angiogenesis pathways. Integrin α v β 3 , which is strongly involved in the regulation of angiogenesis, is a target for nuclear imaging and treatment (theranostics) . It is largely expressed in neovasculature and tumor cells of various malignancies including melanoma, glioma, breast, pancreas, prostate, lung, head and neck, and gastric cancer . Also, α v β 3 integrin affects tumor growth, local invasion and development of metastases . Arginine-glycine-aspartate (RGD) peptides have high affinity and specificity for the extracellular domain of α v β 3 integrin . Therefore, radiolabeled RGD can be used for imaging of malignancies as well as for subsequent treatment with peptide receptor radionuclide therapy (PRRT). The aim of this study was to determine α v β 3 integrin expression in MTC and its lymph node metastases to assess its suitability as a nuclear target. Correlation of α v β 3 with clinicopathologic variables and survival was assessed. Materials & methods The same cohort, database and TMA were used as described in our previous research . 2.1. Patients Patients who underwent surgery between 1988 and 2014 for MTC were identified from the pathology databases of five Dutch tertiary referral centers: Leiden University Medical Center (LUMC), Amsterdam University Medical Center (AUMC), Radboud University Medical Center (RUMC), University Medical Center Groningen (UMCG) and University Medical Center Utrecht (UMCU). Formalin fixed paraffin embedded (FFPE) tissues were retrieved from pathology archives. Primary tumor tissue was available from 104 patients for inclusion in the tissue microarray (TMA). Additionally, tissue of lymph node metastases from 27 patients from the LUMC and UMCU was available. Clinical and pathological data were obtained from patient records. Germline mutation analysis of the RET gene was performed to confirm all MEN2 diagnoses. Sporadic patients either had a negative germline mutation analysis or a negative family history.
Microscopically detected positive resection margins were not included as a separate variable but incorporated into the T-stage classification. Disease status was based on postoperative calcitonin and CEA serum values. Given the range of assays used across five centers over nearly three decades, no exact values or doubling times were used. A CEA or calcitonin level above the reference range applicable at that time was considered indicative of persistent disease, while values within the normal range were interpreted as cured. Only postoperative CEA and calcitonin values measured more than 6 months after surgery were considered. Necrosis, angioinvasion and desmoplasia were scored on whole slides, on the same FFPE blocks that were used for the construction of the TMA. Necrosis and angioinvasion were scored as absent or present. Desmoplasia was scored as negative, some, moderate or severe. This study was performed according to national guidelines with respect to the use of leftover tissue, and approval for this study, including the use of patient data, was obtained from the Institutional Review Board of the UMCU. 2.2. Construction of the tissue microarray An automated machine (TMA grand master, 3D Histech, Budapest, Hungary) was used to create the TMA. Three cores of 0.6 mm were punched from each FFPE block of primary tumor and available lymph node metastases. To ensure that cores were punched from tumor regions, a pathologist (PJvD) identified and marked cell-rich areas on H&E slides. These slides were then scanned and the marked areas were manually circled using TMA software (3D Histech). 2.3. Immunohistochemistry TMA blocks and whole slides were cut at 4 μm and mounted on coated slides. Staining for α v β 3 was carried out manually following this protocol: after baking the slides at 60°C for 10 min, slides were deparaffinized in xylene for 10 min, hydrated in a series of 100% ethanol and 70% ethanol, and rinsed with demi-water. Hereafter, slides were washed with PBS twice. Endogenous peroxidase was blocked using 3% H 2 O 2 in PBS for 15 min. Antigen retrieval was performed in Tris-EDTA buffer (pH 9) by boiling. Slides were washed with PBS-Tween twice, then incubated with Pierce protein-free T20 (PBS) blocking buffer (PIER37573, Thermo Scientific) at room temperature in a dark place for 15 min. The primary α v β 3 antibody (1:25, ab7166 mouse monoclonal [BV3], Abcam) was incubated overnight in a dark place at 4°C. Slides were washed with PBS-Tween three times. Then, a two-step detection system was used (VWRKC-DPVB110HRP, Immunologic). First, a post-blocking step was performed for 15 min and slides were washed with PBS-Tween three times. Secondly, poly-HRP-anti-mouse/rabbit HRP was added for 30 min; both incubations took place in the dark at room temperature. Slides were washed with PBS-Tween three times. Bright DAB (VWRKBS04-110, Immunologic) was added and the slides were incubated for 8 min in the dark at room temperature. Slides were washed with tap water, counterstained with 3x diluted Mayer's hemalum solution (1.09249.0500, Sigma-Aldrich), washed with tap water and coverslipped. Tissue of renal cell carcinoma and hemangioma was used as a positive control. As a negative control, the staining was performed on tissue of renal cell carcinoma and MTC without addition of the primary antibody.
2.4. Scoring of the immunohistochemistry
The cores included in the TMA and the whole slides were scored for cytoplasmic and membranous staining by an experienced pathologist (PJvD) and a researcher (LHdV), both blinded to clinicopathologic characteristics. Any disagreements were resolved through discussion, when necessary with the help of a third reviewer (LL). The intensity of cytoplasmic staining was scored as absent (0), weak (1), moderate (2) or strong (3). Membranous staining was scored as present or absent. Staining was considered homogeneous if the intensity across the various cores was consistent. shows representative scores of all immunostainings. Data on hypoxia-inducible factor-1 alpha (HIF-1α), VEGF, glucose transporter 1 (Glut-1), carbonic anhydrase IX (CAIX), microvessel density (MVD) and somatostatin receptor 2A (SSTR2A) were available from previous studies .
2.5. Statistical analysis
Categorical data were summarized using frequencies and percentages, while continuous data were summarized using medians and ranges. To enhance the statistical power, categorical data were recoded into dichotomous variables. Grade of desmoplasia was recoded into none-some vs. moderate-severe. Stage was recoded into stage I–III and stage IV. Heredity was recoded as either sporadic disease or MEN2 syndrome. αvβ3 scores were transformed into a dichotomous variable, considered positive if the average intensity of cytoplasmic staining in the scored cores was >1 or if membranous staining was present in ≥1 of the scored cores. Overall survival (OS) was defined as time to death from any cause. Progression-free survival (PFS) was defined as time to the development of distant metastases or death. Univariate Cox regression survival analysis was performed. Furthermore, Kaplan-Meier survival curves were plotted and significance was calculated using the log-rank test. All reported p-values were two-sided. Analysis was performed using SPSS software, version 25.0 (IBM, Armonk, NY, USA).
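As an illustration only, the dichotomization rule and the univariate survival analysis described above could be set up along the following lines in Python; the data frame, column names and values are hypothetical and do not correspond to the study data, and availability of the lifelines package is assumed.

```python
# Minimal sketch (hypothetical data): dichotomization of the avb3 scores and
# univariate survival analysis analogous to the methods described above.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "mean_cyto_intensity": [1.7, 0.7, 2.3, 1.0],   # mean of the 0-3 core scores per patient
    "membranous_positive_cores": [0, 1, 2, 0],     # number of cores with membranous staining
    "os_months": [120, 34, 80, 150],               # follow-up time (overall survival)
    "death": [0, 1, 1, 0],                         # event indicator (death from any cause)
})

# Positive if average cytoplasmic intensity > 1 OR membranous staining in >= 1 core
df["avb3_positive"] = (df["mean_cyto_intensity"] > 1) | (df["membranous_positive_cores"] >= 1)

# Kaplan-Meier curves and log-rank test by avb3 status
pos, neg = df[df["avb3_positive"]], df[~df["avb3_positive"]]
kmf = KaplanMeierFitter()
kmf.fit(pos["os_months"], event_observed=pos["death"], label="avb3 positive")
lr = logrank_test(pos["os_months"], neg["os_months"],
                  event_observed_A=pos["death"], event_observed_B=neg["death"])
print("log-rank p =", lr.p_value)

# Univariate Cox regression for the dichotomized marker
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "avb3_positive"]].astype(float),
        duration_col="os_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```

The toy cohort is far too small for a stable Cox fit; the snippet only shows how the dichotomization and the tests described in the Methods fit together.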
Results
3.1. Clinicopathological variables
Baseline characteristics are shown in . One hundred and four patients were included. Patients were aged 10 to 82 years (mean 45.8, SD 16.3). Half of the patients were male. The majority of patients had sporadic disease (56.8%), 38.9% had MEN2A and 4.2% had MEN2B. Patients presented with stage I, II, III and IV disease in 13.5%, 24.0%, 16.7% and 45.8%, respectively. Tumor size ranged from 4 to 70 mm (mean 25.6 mm, SD 14.8). At the time of initial surgery, 63.4% of patients had developed lymph node metastases.
3.2. αvβ3 expression in the primary tumor
The mean intensity of αvβ3 in all cores containing primary tumor was 1.6 (SD 0.58). Only two patients showed no cytoplasmic αvβ3 expression in one or more cores. The intensity of the scored cores was 0, 1, 2 and 3 in 0.8%, 42.8%, 52.4% and 4.0%, respectively. Among the 91 patients with multiple cores available for analysis, 71.4% exhibited homogeneous expression throughout the primary tumor. Membranous staining was seen in 28.8% of patients. In 75.8% of patients with multiple cores available for analysis, membranous staining was consistently present or absent in all cores.
3.3. αvβ3 expression in primary tumor vs. lymph node metastases
The average expression in the primary tumor and lymph node metastases for these individual patients is demonstrated in . Tissue of lymph node metastases of 27 patients was available in the TMA. Twenty-three patients had cytoplasmic αvβ3-positive primary tumors. These 23 patients had 29 lymph nodes available for analysis, of which six had negative and 23 had αvβ3-positive cytoplasm. Two of the four patients with αvβ3-negative cytoplasm in the primary tumor had positive cytoplasm in the lymph node metastases. Eleven of the 27 patients had αvβ3-positive membranes in the primary tumor, of which two patients also showed membranous expression in the lymph node metastases. Four patients had negative membranes in the primary tumor but positive membranes in the lymph node metastases.
3.4. Association between αvβ3 expression in the primary tumor & clinicopathological variables
shows αvβ3 expression in comparison with clinicopathological variables.
αvβ3-positive membranes were seen significantly (p = 0.01) more often in patients with sporadic MTC compared with patients with MEN2 (77.8 vs. 22.2%, respectively); for membranous positivity, no other significant variables were found. Patients with lymph node metastases at the time of initial surgery significantly (p = 0.02) more often had αvβ3-positive cytoplasm compared with patients without lymph node metastases (71.0 vs. 29.0%, respectively).
3.5. Prognostic value
Univariate survival analysis for cytoplasmic and membranous αvβ3 expression was not significant for PFS or OS, as is outlined in . In Supplementary Figure S1, Kaplan-Meier survival curves are shown. For cytoplasmic αvβ3-positive vs. -negative MTC, 10-year survival rates were 84 and 81% for PFS, and 70 and 64% for OS, respectively. For membranous αvβ3 positivity and negativity, PFS was 70 and 52%, and OS was 84 and 75% after 10 years, respectively.
Discussion
This study shows that the theranostic target αvβ3 was expressed in the cytoplasm in the majority and on the membrane in a minority of MTC. In most cases, αvβ3-positive tumors exhibited homogeneous expression throughout the primary tumor. Survival analysis showed no prognostic value of αvβ3. While Cheng et al. examined αvβ3 expression in three PTC cell lines using immunofluorescence and showed moderate to high expression on the cell surface (p = 0.05), immunohistochemical staining of αvβ3 has not been evaluated on thyroid tumors in other series . In pancreatic carcinoma, predominantly cytoplasmic staining is observed . Gastric cancer shows mainly membranous staining; in cases of strong membranous staining, some cytoplasmic staining is also seen . Brain metastases of lung carcinoma exhibit prominent membranous staining . Prostate cancer displays some cytoplasmic staining but lacks membranous staining . Our immunohistochemistry results show that αvβ3 is largely expressed in the cytoplasm of MTC rather than in the membrane. Only three cores in the TMA did not express any cytoplasmic αvβ3, while 67.3% of patients were deemed αvβ3 positive using our cut-off value. Membranous staining was seen in 28.8% of patients.
αvβ3 expression and imaging with radiolabeled RGD have not yet been investigated in MTC, nor has treatment with 177Lu-labeled RGD. However, imaging and treatment with radiolabeled RGD have been investigated in differentiated thyroid carcinoma (DTC). Zhao et al. described uptake in radioactive iodine (RAI)-refractory metastatic lesions in ten DTC patients on 99mTc-3PRGD2 SPECT imaging . Vatsa et al. presented a case of RAI and 18F-FDG non-avid papillary thyroid carcinoma (PTC), in which 68Ga-DOTA-RGD2 was able to depict cervical lymph node metastases . Parihar et al. compared 68Ga-DOTA-RGD2 to 18F-FDG PET/CT in 44 patients with RAI-refractory DTC and found a similar sensitivity but a significantly higher specificity of 68Ga-DOTA-RGD2, especially for lymph node metastases . Furthermore, they reported results suggesting a response to 177Lu-DOTA-RGD2 treatment, with a follow-up time of four months, in a single DTC patient with uptake in the thyroid remnant, cervical and mediastinal lymph nodes, bone lesions and lung nodules on 68Ga-DOTA-RGD2 PET/CT . In our analysis, a distinction was made between patients with cytoplasmic and membranous expression. RGD binds to the extracellular domain of the αvβ3 integrin . Therefore, membranous expression is of particular interest for theranostic purposes and should be the focus of further research.
Patients with sporadic MTC significantly more often had αvβ3-positive membranes. Hence, this subgroup of patients, though small, may benefit more from imaging with radiolabeled RGD and may be more eligible for PRRT, especially when curative surgery is no longer possible. It is plausible that patients with more abundant membranous αvβ3 expression show more uptake on RGD imaging. However, this has not been studied in thyroid cancer or other tumors. Further research on the relation between immunohistochemical αvβ3 expression and uptake of radiolabeled RGD is therefore needed.
αvβ3 integrin has a strong effect on angiogenesis and is associated with tumor growth, tumor invasion and the development of metastases in various malignancies, which are all prognostically relevant . Our results show a correlation between cytoplasmic expression and the presence of lymph node metastases at the time of the primary surgery, which is in line with results on pancreatic cancer . Furthermore, the expression of αvβ3 was correlated with bone metastases in prostate and breast carcinoma . Further research is needed to investigate whether αvβ3 is also correlated with distant metastases in MTC. A correlation with tumor size was not seen in our study, contrary to the results of studies describing tumor growth and proliferation in ovarian cancer . In cervical cancer, αvβ3 is significantly correlated with decreased survival . This is in contrast with the findings of Böger et al., who showed significantly increased survival for patients with αvβ3-positive gastric cancer . In our study, survival analysis showed no significant results.
A strength of this study is the relatively large sample size of 104 patients, considering the rarity of MTC. Another strength is the long follow-up time (mean 68.9 months, range 0–318 months), which is essential since MTC has low proliferative activity and low event rates. Furthermore, for the first time, immunohistochemical αvβ3 data were combined with clinical end points such as the development of distant metastases and death. Most limitations of this study are a result of the retrospective design and the low incidence of MTC. To assess a substantial amount of data, patients were included from five tertiary referral centers over a period of almost thirty years. As a consequence, variables that were consistent over time and between centers had to be used in our analysis, and our follow-up ranges widely. Over the years, surgical guidelines have changed and surgical techniques may have differed between centers. A subanalysis of progressive patients would have been of added value but was not possible due to the sample size. For future research involving a larger cohort, it would be interesting to use a more extensive IHC scoring system such as the immunoreactive score (IRS).
Conclusion
To conclude, αvβ3 seems to be frequently expressed in the cytoplasm and less often on the membranes of MTC cells. For future research, implementing a more extensive IHC scoring system such as the IRS would be advisable. Also, the correlation of immunohistochemical αvβ3 expression with uptake of radiolabeled RGD should be further assessed in patients with membranous αvβ3 expression.
Supplementary Figure S1
Does hematology rotation impact the interest of internal medicine residents in considering hematology as a career? | f501676a-6161-412e-82dc-05854d3aac62 | 10909288 | Internal Medicine[mh] |
There is a growing unmet demand for hematologists worldwide . The local situation is no exception, as the number of hematologists in Saudi Arabia is 4 per million people . Although this ratio is better than in lower- and middle-income countries, where the number of hematology specialists is less than 1 per million people , it is significantly lower than in western high-income countries like Canada, where the ratio is 13 per million people . The shortage of hematologists calls for strategies that encourage more physicians to specialize in hematology. Different subspecialty training programs are known to have variable effects and outcomes. This is attributed to multiple factors including the setting of training, workload, knowledge gained, psychological stress, involvement in research and physician lifestyle . Exposure to medical branches during graduate training is one of the strong factors that impact the interest of trainees in pursuing specialties . As per the National Training Authority of Saudi Arabia, internal medicine residents must go through at least 8 weeks of mandatory clinical hematology rotation, and the management of both malignant and non-malignant hematological conditions is part of the internal medicine curriculum . This hematology rotation represents an opportunity to attract physicians in training to the hematology field. However, the contrary may also happen, and a specialty rotation may have a negative impact on interest. A study found that an inpatient hematology-oncology rotation is associated with a decreased interest in an oncology career . The aim of this study was to determine the impact of the hematology rotation on the interest of internal medicine residents in considering hematology as a career, and to explore possible factors that may influence this interest.
This prospective observational study was conducted in the period from December 2019 to May 2021 at King Saud Medical City (KSMC), Riyadh, Saudi Arabia.
Participants and setting
Participants were internal medicine residents from different institutions performing their hematology rotations at the hematology unit of KSMC. Residents who had performed a prior hematology rotation were not excluded. The Saudi board program of internal medicine consists of four years of full-time residency training in internal medicine and its branches. It is divided into two levels: junior (R1 – R2) and senior (R3 – R4), each consisting of 2 years of training. It is mandatory for internal medicine residents to go through an at least 8-week clinical hematology rotation during residency. This can be completed either as a single 8-week block or divided into shorter periods. The hematology unit is part of the Hemato-Oncology Department of KSMC, which is a 1250-bed central tertiary care hospital. The unit is responsible for the investigation and management of adult patients with both benign and malignant hematological disorders. The service is provided through an inpatient unit, a consultation team, outpatient clinics, and on-call emergency duty.
The survey and other data collected
Before and after the hematology rotation, residents were asked to complete an anonymous questionnaire in which they rated, on a 0 to 10 scale, the following statements regarding the hematology specialty: “Consider hematology as a career” (0 = never, 10 = strongly agree), “Manageable workload” (0 = intolerable, 10 = very manageable), “Comfort in dealing with cancer patients” (0 = totally uncomfortable, 10 = very comfortable), and “Perception of hematologist lifestyle” (0 = totally unsatisfactory, 10 = very satisfactory). In addition, in the post-rotation questionnaire, the residents were asked to rate on a 0 to 10 scale the teaching/training by hematology staff (0 = unsatisfactory, 10 = very satisfactory); the knowledge gained in general hematology (anemia, hemoglobinopathies, bone marrow failure, etc.), hematological malignancies, bleeding and thrombosis, and emergencies (0 = none, 10 = excellent); and the usefulness of the rotation in preparation for the internal medicine board exam (0 = not useful at all, 10 = very useful). The following data were also collected: age, gender, internal medicine training center, level of training, prior rotation in hematology, and its level, type, and duration.
Statistical analysis
Categorical data were described as numbers and percentages and continuous data as mean with standard deviation (SD) or median with interquartile range (IQR) as appropriate. The normal distribution of continuous variables was tested using the Shapiro-Wilk test. The Wilcoxon test for paired samples was used to compare the pre- and post-rotation 0–10 scale ratings. The correlation between considering hematology as a career and other ratings was tested using Spearman’s rank correlation test. A p-value < 0.05 was considered significant. Statistical tests were performed using MedCalc® Statistical Software version 22.009 (MedCalc Software Ltd, Ostend, Belgium).
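For illustration only, the paired pre/post comparison and the correlation analysis described above could be carried out as follows with SciPy in Python; the score vectors are hypothetical and do not reproduce the study data.

```python
# Minimal sketch (hypothetical ratings): paired pre/post comparison and correlation
# analysis analogous to the statistical methods described above.
from scipy.stats import shapiro, wilcoxon, spearmanr

pre_career  = [3, 5, 2, 6, 4, 5, 3, 4]   # pre-rotation "consider hematology as a career" (0-10)
post_career = [5, 6, 4, 7, 4, 8, 5, 6]   # post-rotation ratings of the same residents
delta_career = [b - a for a, b in zip(pre_career, post_career)]

print("Shapiro-Wilk p =", shapiro(delta_career).pvalue)          # normality of the differences
print("Wilcoxon paired p =", wilcoxon(pre_career, post_career).pvalue)

# Correlation between the change in career interest and the change in another rating
delta_workload = [1, 0, 2, 1, -1, 3, 1, 2]                        # hypothetical change scores
rho, p = spearmanr(delta_career, delta_workload)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```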
Of 62 IM residents, 60 completed the pre- and post-rotation questionnaires during their hematology rotation at KSMC (response rate: 96.8%). Their characteristics are illustrated in Table . The majority (80%) were in the age range of 25–29 years and 73% were males. Almost two-thirds were at a senior level of training, and the internal medicine training center was KSMC in 75%. Forty percent of residents had a prior hematology rotation, mainly (50%) as part of their internal medicine residency training. The average duration of the hematology rotation was 5.4 (SD: 1.8) weeks. The duration was 4 weeks in 37 (61.7%) residents, 6 weeks in 4 (6.7%) and 8 weeks in 19 (31.7%). Table shows the post-rotation satisfaction with teaching/training and knowledge gained on a 0 to 10 scale (0 = not satisfied at all and 10 = very satisfactory). Residents were overall satisfied with the explored items. The pre- and post-rotation residents’ perception of the hematology career is shown in Table . There was a significant increase in all assessed perception items, including considering hematology as a career. The subgroup analysis of the change in considering hematology as a career following the hematology rotation is detailed in Table . The difference was statistically significant in all subgroups except in older (> 29 years) residents and those with a prior hematology rotation or performing the rotation during the R1-R2 (junior) level. There was no significant difference in the change in considering hematology as a career according to the duration of the hematology rotation during which the study was conducted (Kruskal-Wallis test statistic = 0.1259, p = 0.937). Similarly, the correlation was not significant between the change in considering hematology as a career and the satisfaction of residents with teaching and knowledge gained during the hematology rotation. On the other hand, there was a highly significant positive correlation between the change in considering hematology as a career and the change in the perception of workload manageability, comfort in dealing with cancer patients and hematologist lifestyle (Spearman’s rho = 0.404, 0.603 and 0.514; p = 0.0014, < 0.0001 and < 0.0001, respectively).
To the best of our knowledge, this is the first study to explore the impact of a hematology rotation on the interest of internal medicine residents in considering a hematology career. We found that the hematology rotation was associated with a significant increase in interest in considering hematology as a career. Although no studies with a similar design exist for comparison, there is evidence that hematology/oncology rotations are associated with pursuing a hematology-only career. In a study that included 626 hematology/oncology fellows in the United States, completing hematology/oncology rotations during internal medicine or pediatric clerkships was significantly (p = 0.01) positively associated with hematology-only career plans . In the same study, fellows who had a hematology-only career plan were significantly (p < 0.01) more likely to report being encouraged to pursue a hematology career and having a clear vision of the hematology career path . In another study, clinical experience during training and more exposure to role models/mentors had a significant effect on the choice of practicing non-malignant hematology among hematology-oncology fellows . This further supports the findings of our study. On the other hand, in the study conducted by McFarland et al. , an inpatient hematology-oncology rotation was associated with a decrease in the interest of internal medicine residents in pursuing a hematology-oncology career. It should be noted that the latter study was conducted in a single institution specialized in cancer care, and the rotation was performed in a ward that admits oncology and malignant hematology patients, but not benign hematology patients. Since the hematology rotation is already mandatory for internal medicine residents in Saudi Arabia, it is not possible to know the exact impact of this rotation on becoming hematologists. However, it is important to identify factors that may influence the attraction of more physicians to hematology careers. The number of variables explored in this study was limited; however, some of them correlated significantly with residents’ interests. The timing of the hematology rotation was one of these factors. There was no significant increase in interest among junior residents performing the rotation during the first two years of residency (R1-R2), while the increase in interest was highly significant among senior residents (R3-R4). Recent research found that specialties’ rotation schedules have an impact on career decisions among medical students . In our setting, by the time senior residents (R3-R4) perform their rotations in hematology, they have already performed a good number of other subspecialty rotations that may have shaped their view of hematology. Another subgroup that did not show a significant increase in interest was the older (> 29 years) residents. These results suggest that having younger residents perform the hematology rotation during the 3rd–4th year of residency may increase the interest of residents in considering a hematology career. There was no significant change in the interest of residents with prior hematology exposure in the current study. Those with a prior rotation had already lived the experience of the hematology rotation, which may have resulted in a level of interest that did not change with further exposure. Subspecialty rotations during internal medicine residency may be an opportunity to attract physicians to subspecialties with shortages.
This is not limited to the hematology subspecialty examined in this study. A study that assessed the interest of internal medicine residents in a hepatology career before and after an inpatient hepatology rotation found a significant post-rotation increase in residents’ interest in hepatology . A limitation of our study is its single-institution design. Future studies including other training centers are needed to explore variables that may differ from one institution to another, such as workload. Another limitation is that it was not possible to know the impact of the change in interest on actually joining a hematology career, because the questionnaire was anonymous. Also, factors found to impact residents’ attitudes towards considering a hematology career, like mentorship and research experiences , and other possibly relevant factors, like comfort in dealing with benign hematology patients, were not explored. In a single Saudi institution, the hematology rotation was associated with a significant increase in the interest of internal medicine residents in considering a hematology career. Identifying factors that may influence this interest in larger studies including other training centers is important to meet the need for hematology specialists.
Efficacy and safety of subthreshold micropulse laser in the treatment of central serous chorioretinopathy accompanied by choroidal hemangioma: a case report | 611fd897-b177-474e-898c-6793d93c3d2b | 11817732 | Surgical Procedures, Operative[mh] |
Choroidal hemangioma is a congenital, benign vascular tumor that can manifest in circumscribed and diffuse forms . The circumscribed choroidal hemangioma (CCH) is a well-defined solitary lesion without systemic manifestations . CCH is usually asymptomatic and often diagnosed in adulthood during a routine eye examination. Exudative or symptomatic CCH requires treatment to prevent sight-threatening vision loss . Central serous chorioretinopathy (CSCR) is a chorioretinal disease associated with vision loss, primarily due to macular subretinal fluid (SRF) leakage. The disorder is characterized by serous macular detachment in the active disease stage . In CSCR, the leakage of SRF is linked to defects in the outer blood-retina barrier of the retinal pigment epithelium (RPE) layer, which occur secondary to choroidal abnormalities and dysfunction . Although the subretinal fluid usually resolves spontaneously with minimal sequelae in CSCR, some patients have a poor visual prognosis and experience recurrence. The treatment of CSCR has been controversial due to the lack of large prospective randomized treatment trials and the relatively few large retrospective studies. Recently, it has been suggested that patients with CCH are predisposed to CSCR . Subthreshold micropulse laser (SML) is a promising treatment approach due to its low treatment cost and repeatability without destruction of the retinal tissue . However, limited studies exist on the safety and efficacy of SML in treating CSCR with CCH. In this study, we present our experience with a patient with CSCR and accompanying CCH treated with SML.
A 59-year-old male patient was referred to our clinic due to a six-month history of blurred vision in his right eye. His history was unremarkable. The best-corrected visual acuity (BCVA) was 0.6 in the right eye and 1.0 in the left eye. The intraocular pressure (IOP) in the right and left eyes was 16 mmHg and 12 mmHg, respectively. There were no obvious abnormalities in the left eye. A slit-lamp examination revealed that the anterior segment of the right eye was normal. Color fundus photography of the right eye showed localized focal RPE changes, orange pigmentation temporal to the macula, and a slight bulge superior to the optic disk (Fig. A). Fundus autofluorescence (FAF) alterations were also detected in the macula (Fig. B). Optical coherence tomography (OCT) of the right eye showed a “saw-tooth” pattern of the RPE, persistent SRF in the central macular area, and an abrupt dome-shaped hyporeflective mass in the choroid (Fig. C). A solid lesion characterized by high internal reflectivity was detected by B-scan ultrasonography (USG) (Fig. D). Fundus fluorescein angiography (FFA) and indocyanine green angiography (ICGA) confirmed the presence of CSCR and accompanying CCH (Fig. E and F). The subfoveal choroidal thickness (SFCT) was 428 μm, retinal thickness in the macula was 320 μm, and tumor thickness was 499 μm, as measured by OCT (Table ). A 577 nm laser in micropulse mode, with a spot size of 140 μm, an exposure time of 0.2 s, and a 5% duty cycle, was used to treat the CSCR. The patient achieved anatomic success with complete resolution of SRF one month after treatment.
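As a purely arithmetical illustration of the reported laser settings, the fraction of time the laser actually emits follows from the duty cycle and exposure time; the 2.0 ms micropulse period used below is an assumed, typical value and is not stated in the case report.

```python
# Sketch: relation between duty cycle, exposure time and delivered on-time.
# Duty cycle and exposure are taken from the case; the micropulse period is assumed.
duty_cycle = 0.05    # 5% duty cycle (from the case)
exposure_s = 0.2     # 0.2 s exposure per application (from the case)
period_ms  = 2.0     # assumed micropulse period (not reported)

on_ms_per_pulse  = duty_cycle * period_ms          # 0.1 ms "on" per micropulse
off_ms_per_pulse = period_ms - on_ms_per_pulse     # 1.9 ms "off" per micropulse
n_micropulses    = exposure_s * 1000 / period_ms   # 100 micropulses per 0.2 s envelope
total_on_ms      = duty_cycle * exposure_s * 1000  # 10 ms of actual emission per spot

print(on_ms_per_pulse, off_ms_per_pulse, n_micropulses, total_on_ms)
```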
Three and six months after SML, the patient presented for follow-up, and the BCVA in the right eye had improved to 0.7 and 0.9, respectively (Table ). Total resolution of the SRF was seen at 3 and 6 months after SML (Figs. A-C and A-C), while no significant changes were found in the size of the CCH (Fig. B and D, and 3B and D). On FFA and ICGA, mottled hyperfluorescent lesions or macular focal leakage were not detected in the central macular area of the right eye (Figs. E and E). Furthermore, in the early and middle phases of ICGA at 3 and 6 months of follow-up, a large elliptical hyperfluorescent lesion was still observed in the choroid superior to the disc in the right eye, with no significant changes compared to the initial presentation (Figs. F and F). Additionally, the retinal thickness in the macula was reduced to 212 μm, and the SFCT and the tumor thickness remained unchanged after SML treatment (Table ). The patient provided written informed consent for publication.
A case of CCH and CSCR in the same eye was documented. In this instance, SML treatment was administered for CSCR, resulting in a dry macula, while no specific treatment was provided for the CCH. The diagnosis of CCH is particularly challenging in chronic lesions, and CSCR is documented as the most probable non-tumor etiology among the suspected diagnoses, given the potential association of CCH with SRF . To accurately diagnose chronic CCH and coexisting CSCR, OCT, FFA, and ICGA were employed in the assessment. Some previous studies have proposed a potential link between CCH and CSCR, as both conditions share a common pathophysiological mechanism involving hyperpermeability of the choroidal vessels . CSCR is believed to arise from a pachychoroid-driven process characterized by choroidal congestion and choroidal hyperpermeability leading to choroidal thickening and dilated choroidal vessels . A similar increase in SFCT has been observed in the same and fellow eyes of CCH patients, indicating generalized, bilateral changes in the choroid . Moreover, RPE alterations were detected in 20% of fellow CCH eyes . CSCR has been proposed to be associated with fluid leakage through the altered RPE, an active movement of fluid from the choroid through the RPE into the subretinal space. A pathological state characterized by elevated hydrostatic pressure within the choroid, resulting from choroidal hyperpermeability and pachychoroid disease, may contribute to dysfunction of the RPE pump, which is correlated with focal or multifocal RPE leakage . This widespread disturbance in the choroidal vascular organization, characterized by increased SFCT, hypercongestion, and hyperpermeability, may contribute to the development of CSCR in the context of CCH . Considering this novel association, it is recommended that patients with CCH be evaluated for CSCR and vice versa.
SML is a newly adopted treatment modality for retinal diseases . However, clear recommendations for its use in specific clinical entities have not yet been formulated, and its application is based on the results of published studies and the surgeon’s discretion. SML, which stimulates the RPE without damaging the adjacent neuroretina, facilitates the repair of the inner blood-retinal barrier, the restoration of the RPE blood-retinal barrier, and increased retinal cell adhesion .
Specifically, treatment with SML promotes the expression of heat shock protein (HSP) and vascular endothelial growth factor (VEGF) inhibitors , thus restoring cellular function in the RPE and regulating permeability factors to improve RPE pump function . Early initiation of treatment has been suggested to yield better functional outcomes . While previous studies have reported relatively high rates of SRF resolution following SML, only moderate improvements in vision were observed in most cases of chronic CSCR . In the present study, complete resolution of SRF was achieved three months after SML, and BCVA increased to 0.9 at the six-month follow-up. The proportion of CSCR patients with resolved subretinal fluid increases at the 12-month follow-up compared to the 6-month follow-up, suggesting that SML provides durable long-term outcomes in the management of CSCR (Table ) . Given that both CSCR and CCH are part of the pachychoroid spectrum of diseases, SFCT may also be associated with the response to SML treatment. In our study, SFCT did not change following SML treatment, which is consistent with previous findings . However, recent studies, including ours, indicate that treatment with SML can also lead to alterations in the choroid and choriocapillaris , such as changes in choroidal vascular density . It is important to note that not all patients respond to SML, especially long-standing cases with compromised retinal morphology and significantly increased choroidal thickness . ICG-guided half-dose photodynamic therapy (PDT) is widely regarded as the gold standard for the management of CSCR. In one study, SML was superior to PDT in improving both visual and anatomical outcomes at the six-month mark . A meta-analysis further indicates that SML significantly improves BCVA in comparison to PDT, while PDT demonstrates greater efficacy than SML in reducing SFCT . A recent study indicates that PDT has a more significant effect on the choroid-choriocapillaris than SML. Specifically, the PDT group exhibited an increase in choriocapillaris flow voids (CCFV), while the SML group demonstrated a reduction in CCFV . Interestingly, the number of flow deficits has been found to increase in patients receiving PDT in comparison to those undergoing SML . This phenomenon may be attributed to choriocapillaris occlusion following PDT, which is triggered by platelet aggregation and inflammatory substances resulting from oxidative materials generated by light exposure and verteporfin . Nevertheless, SML is considered a cost-effective, readily available alternative to PDT that does not involve the intravenous administration of a photosensitive drug with potential systemic side effects. Thus, SML may be considered a viable competitive alternative to PDT for the treatment of CSCR. No correlations were found between the response to SML treatment and the size of the CCH or the recurrence of CSCR, suggesting that SML may be appropriate for treating CSCR associated with CCH.
Generalized, diffuse choroidal alterations and hyperpermeable states are likely shared underlying pathophysiologies for both CCH and CSCR, allowing for the conceivable coexistence of CCH and CSCR in the same eye. Furthermore, SML demonstrated a favorable effect in reducing SRF and improving BCVA in this patient with CSCR and concurrent CCH during the 6-month follow-up. However, additional studies involving more patients are necessary to confirm the efficacy and safety of SML in the treatment of CSCR accompanied by CCH.
Molecular pathology of urological tumors | c19c4833-88bb-45e1-8988-fa202a1240be | 8084837 | Pathology[mh] |
Molecular subtyping, TERT and FGFR3 alterations
Based on the expression of “luminal/urothelial” and “basal/squamous” markers, inflammatory activation and cell-cycle activity, a recent consensus classification defined six molecular subtypes of urothelial carcinoma (“luminal-papillary”, “luminal non-specified”, “luminal unstable”, “stroma-rich”, “basal/squamous” and “neuroendocrine-like”) . The majority of muscle-invasive urothelial carcinomas can thus be assigned either to the “luminal” or to the “basal/squamous” subtype. Only a small proportion is assigned to the “neuroendocrine-like” subtype on the basis of the expression of neuroendocrine genes. “Luminal-papillary” subtypes show the longest (60% at 5 years) and “neuroendocrine-like” subtypes the shortest overall survival (15% at 5 years) . The data on the predictive value of the molecular subtypes with regard to response to neoadjuvant chemotherapy are, however, controversial and have been discussed in detail elsewhere in this journal . In addition, the technical requirements are high and established immunohistochemical surrogate marker panels are lacking, so molecular subtyping has not yet entered current routine diagnostics.
Mutations in the promoter of the TERT gene, which encodes a catalytic subunit of telomerase, occur in about 60–80% of all urothelial carcinomas and represent an early genetic aberration in carcinogenesis . In benign lesions such as reactive urothelial lesions or inverted papilloma, TERT mutations are only very rarely detectable, so mutation analysis can be helpful in difficult cases .
Recently, the FGFR inhibitor erdafitinib was approved in the USA for patients with advanced/metastatic urothelial carcinoma . As a prerequisite, specific FGFR3 or FGFR2 alterations must be demonstrated and there must be progression during or after platinum-based chemotherapy. In parallel, a companion diagnostic test for the detection of FGFR3 or FGFR2 alterations was approved . FGFR3 mutations or translocations are found in about 15% of muscle-invasive urothelial carcinomas, but are considerably more frequent in non-invasive papillary carcinomas (75%) . In non-muscle-invasive urothelial carcinomas, FGFR3 alterations are associated with a lower rate of progression to muscle-invasive urothelial carcinoma and with a better prognosis . There was consensus that integrating FGFR3 status into tumor grading or clinical decision-making would currently be premature. Erdafitinib is not approved in Germany at the present time (as of October 2020).
Molecular biomarkers in urine cytology and liquid biopsy diagnostics
While urine cytology has a high sensitivity for the detection of high-grade urothelial carcinoma and carcinoma in situ, a negative finding does not exclude a low-grade urothelial carcinoma. The advantages of urine-based biomarkers lie in the increased sensitivity for the detection of high-grade and low-grade urothelial carcinomas .
The former show typical chromosomal aberrations such as aneuploidies of chromosomes 3, 7 and 17 as well as loss of 9p21, which can be detected by FISH (UroVysion test, Abbott Laboratories, Abbott Park, IL, USA) . For the molecular detection of low-grade urothelial carcinomas, frequent and typical gene mutations such as TERT or FGFR3 mutations can be used . Given the higher sensitivity of molecular pathology testing, surveillance would raise the scenario of a purely molecular recurrence in the urine without a positive cytology (“anticipatory positive”). At present, however, routine determination of urine-based molecular biomarkers during follow-up is not recommended.
There has also been progress in the field of so-called liquid biopsy diagnostics in urothelial carcinoma. The detection of circulating tumor cells (CTCs) in the blood of patients with urothelial carcinoma correlates with a more aggressive disease course . The same applies to the detection of circulating tumor DNA (ctDNA) . In addition, FGFR mutation analysis as a prerequisite for therapy with erdafitinib could in future be performed on ctDNA. Despite these promising developments, liquid biopsy diagnostics remains limited to studies for the time being.
Rare histological variants of urothelial carcinoma
The diagnosis of rarer histological subtypes (micropapillary, plasmacytoid or neuroendocrine carcinoma) is relevant because these histological variants are more aggressive, which may argue for early cystectomy even in the absence of demonstrated muscle invasion. The plasmacytoid variant frequently shows loss of E-cadherin, which is usually caused by mutations in the CDH1 gene and explains the discohesive growth of this aggressive variant . E-cadherin immunohistochemistry is not required for the diagnosis, but can be helpful in individual cases to distinguish it from artifactual discohesive growth. The micropapillary variant frequently (approximately 30%) shows HER2 amplification with HER2 overexpression, which may open up new therapeutic options . Nevertheless, owing to the still insufficient data, routine testing is not yet recommended. Small cell carcinoma of the urinary bladder, which is treated fundamentally differently, should be confirmed by immunohistochemical neuroendocrine markers. Since conventional urothelial carcinomas can also show a neuroendocrine immunophenotype, but the response of these tumors to neoadjuvant chemotherapy is unclear, additional immunohistochemical work-up should only be performed when the conventional morphology is that of a small cell carcinoma .
Metastatic urothelial carcinoma and immunotherapies
In about 20% of cases, locally advanced and metastatic urothelial carcinoma responds to checkpoint inhibitors, which are used as first- and second-line therapy . In Germany, pembrolizumab and atezolizumab are approved in the first line only for cisplatin-ineligible patients whose tumor tissue shows an IC score of ≥5% (atezolizumab) or a CPS of ≥10 (pembrolizumab). In the second line, nivolumab is also approved, and mandatory PD-L1 immunohistochemistry is not required for any of the three agents in the second-line setting.
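For orientation only, the two PD-L1 read-outs mentioned above can be sketched as simple calculations; the cell counts and areas in the example are hypothetical, and the exact scoring rules must be taken from the instructions of the respective assays.

```python
# Sketch of the two PD-L1 read-outs referred to above (hypothetical input values).
def combined_positive_score(pdl1_tumor_cells, pdl1_immune_cells, viable_tumor_cells):
    """CPS: PD-L1-positive cells (tumor cells, lymphocytes, macrophages)
    per 100 viable tumor cells, conventionally capped at 100."""
    cps = 100.0 * (pdl1_tumor_cells + pdl1_immune_cells) / viable_tumor_cells
    return min(cps, 100.0)

def immune_cell_score(pdl1_ic_area, tumor_area):
    """IC score: percentage of the tumor area occupied by PD-L1-positive immune cells."""
    return 100.0 * pdl1_ic_area / tumor_area

cps = combined_positive_score(pdl1_tumor_cells=30, pdl1_immune_cells=25, viable_tumor_cells=500)
ic  = immune_cell_score(pdl1_ic_area=0.8, tumor_area=12.0)   # e.g. mm^2

print(f"CPS = {cps:.0f} -> pembrolizumab threshold (CPS >= 10): {cps >= 10}")
print(f"IC  = {ic:.1f}% -> atezolizumab threshold (IC >= 5%): {ic >= 5}")
```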
The ISUP recommends routine testing of PD-L1 status in metastatic urothelial carcinoma; translated to the approval situation in Germany, interdisciplinary consultation is accordingly to be recommended. Less than 1% of urothelial carcinomas of the urinary bladder, but about 20% of urothelial carcinomas of the upper urinary tract, are highly microsatellite instable (MSI-high) or mismatch repair (MMR) deficient. The latter are characteristic urogenital tumors in the context of Lynch syndrome . In the USA, the Food & Drug Administration (FDA) has granted a tumor-agnostic approval of the checkpoint inhibitor pembrolizumab for solid MSI-high or MMR-deficient tumors . The ISUP therefore recommends routine immunohistochemistry for MLH1, PMS2, MSH2 and MSH6 in all urothelial carcinomas of the upper urinary tract. Since a tumor-agnostic approval of pembrolizumab has not been implemented by the European Medicines Agency (EMA), testing here is likewise only recommended after interdisciplinary consultation.
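As a simplified illustration of the heterodimer logic commonly applied when the four MMR stains are read together (a simplification; interpretation always requires intact internal controls and the morphological context), a minimal sketch:

```python
# Simplified sketch: typical interpretation of the MLH1/PMS2/MSH2/MSH6 staining pattern.
# True = nuclear staining retained, False = lost; real cases need internal controls.
def suspected_mmr_defect(mlh1, pms2, msh2, msh6):
    if mlh1 and pms2 and msh2 and msh6:
        return "MMR proteins retained (MMR-proficient pattern)"
    if not mlh1 and not pms2:
        return "MLH1 defect suspected (mutation or promoter hypermethylation)"
    if not msh2 and not msh6:
        return "MSH2 defect suspected"
    if not pms2:                      # isolated PMS2 loss
        return "PMS2 defect suspected"
    if not msh6:                      # isolated MSH6 loss
        return "MSH6 defect suspected"
    return "Unusual pattern - repeat staining / molecular work-up"

print(suspected_mmr_defect(mlh1=False, pms2=False, msh2=True, msh6=True))
```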
In locally advanced and metastatic urothelial carcinoma, a response to checkpoint inhibitors, which are used as first- and second-line therapy, is found in about 20% of cases. In Germany, pembrolizumab and atezolizumab are approved for first-line therapy only in cisplatin-ineligible patients whose tumour tissue shows an IC score of ≥5% (atezolizumab) or a CPS of ≥10 (pembrolizumab). In the second-line setting, nivolumab is also approved, and mandatory PD-L1 immunohistochemistry is waived in second-line therapy for all three agents.

Role of p16 immunohistochemistry and HPV testing in penile carcinoma and its precursor lesions

In the WHO classification, penile intraepithelial neoplasia (PeIN) and invasive penile carcinoma are divided, on the basis of aetiological and prognostic features, into HPV (human papillomavirus)-associated and non-HPV-associated neoplasms. The conventional morphological hallmark of HPV infection is a basaloid, condylomatous or undifferentiated morphology, whereas non-HPV-associated neoplasms are morphologically mostly keratinising low-grade tumours. Positive p16 immunohistochemistry is almost conclusive evidence of a high-risk HPV infection and therefore represents a sensitive surrogate marker. The morphological distinction between a pleomorphic differentiated PeIN and an HPV-associated PeIN with high-grade intraepithelial neoplasia can be difficult. Here, p16 immunohistochemistry can lead to the diagnosis of an HPV-associated PeIN with high-grade intraepithelial neoplasia. Ki-67 immunohistochemistry can help distinguish a so-called hyperplasia-like differentiated PeIN from true squamous hyperplasia, since the detection of suprabasal Ki-67-positive cells argues for a hyperplasia-like differentiated PeIN. Recent studies point to an increased risk of malignant transformation of certain subtypes of penile condylomas with high-risk HPV infection. It is therefore recommended that p16 immunohistochemistry and/or HPV typing be performed on condylomas with moderate or severe atypia.
According to the current WHO classification, testicular germ cell tumours (TGCT) are divided into 3 subcategories based on their association with germ cell neoplasia in situ (GCNIS). According to the ISUP recommendation, the absence or presence of GCNIS must be stated in the report. Prepubertal, non-GCNIS-associated teratomas (type I TGCT) are benign and occur mainly before puberty, more rarely also after puberty. They are cytogenetically diploid and show no specific genetic aberrations. The detection of aneuploidy, typically loss of chromosome 6q, represents the molecular correlate of the transition into a malignant, usually indolent yolk sac tumour, which, in addition to its conventional morphology, can be identified by positivity for AFP and glypican-3. The distinction from a postpubertal, GCNIS-associated malignant teratoma (type II TGCT) is therapeutically relevant and, because the immunohistochemical profiles are identical, is possible only through detection of an Oct3/4-positive GCNIS in the postpubertal, GCNIS-associated teratoma. In difficult cases, detection of aneuploidy with the gain of 12p found in GCNIS-associated teratomas can further be used to distinguish them from a prepubertal, non-GCNIS-associated teratoma. In adolescent or adult patients, a conventional morphological suspicion of a prepubertal, non-GCNIS-associated teratoma should be confirmed by FISH for chromosome 12. Detection of a gain of chromosome 12p can also serve to confirm germ cell tumour identity in unclear primary or metastatic tumours. Detailed recommendations on immunohistochemical diagnostics in TGCT are available. The diagnosis of spermatocytic tumour (type III TGCT) is usually readily possible on conventional morphology. In cases of diagnostic difficulty, the distinction from seminoma is achieved by negative Oct3/4 immunohistochemistry in the spermatocytic tumour. The gain of chromosome 9 that is regularly detectable in spermatocytic tumours, and that is not found in type II TGCT, has no role in routine diagnostics.
As a surrogate marker for the latter aberration, however, overexpression of DMRT1, which is located on chromosome 9, can be demonstrated immunohistochemically. The miRNA miR-371a-3p has been identified as a promising candidate for a liquid biopsy-based molecular biomarker in primary diagnosis and follow-up. miR-371a-3p is overexpressed by the malignant components of all TGCT, including the non-GCNIS-associated yolk sac tumour, and can be detected, for the time being as an experimental marker, in patients' serum, plasma and cerebrospinal fluid.

Clear cell renal cell carcinoma (RCC), at about 65–70%, is the most frequent subtype of all RCC. Known molecular alterations are mutations or promoter methylation of the VHL gene. As a "second hit", a partial or complete deletion of chromosome 3p is typically found. Although the diagnosis of clear cell RCC is often readily possible on conventional morphology, carbonic anhydrase 9 (CAIX) immunohistochemistry can be used as a surrogate marker for alterations of the VHL-HIF axis. Only a strong, continuous membranous staining reaction (analogous to a HER2 score of 3+) should be regarded as positive. It should also be borne in mind that CAIX positivity is generally also found in hypoxic tissues and that positivity confined to perinecrotic areas should be disregarded. Mutations of the genes SETD2, BAP1 and PBRM1, which are likewise located on chromosome 3p, also appear to be associated with biological behaviour but are currently not examined in routine diagnostics.

Papillary RCC, at about 15–19%, is the second most frequent subtype of all RCC. Immunohistochemically, these tumours are positive for cytokeratin 7 and AMACR in most cases. In the presence of multiple or familially clustered type 1 papillary RCC, a hereditary papillary RCC syndrome with a germline mutation in the MET gene should be considered.

Clear cell (tubulo)papillary RCC predominantly shows a tubular, cystic and/or papillary architecture with clear tumour cells and uniform, apically oriented nuclei. Molecular characteristics of clear cell or papillary RCC are absent; instead, alterations of mitochondrial DNA have been described. The distinction from clear cell RCC is relevant because of the indolent behaviour and is achieved by a characteristic immunoprofile (cytokeratin 7, GATA3 and CAIX [basolateral] positive; AMACR and CD10 negative).

The sometimes difficult differential diagnosis of eosinophilic or oncocytic renal tumours encompasses renal oncocytoma (RO), chromophobe carcinoma (ChRCC), the oncocytic variant of papillary RCC (OPRCC), the eosinophilic variant of clear cell RCC (CCRCC), the hybrid oncocytic-chromophobe tumour (HOCT) and other, as yet unclassified eosinophilic tumours. In chromophobe carcinoma, cytogenetic alterations are variable and include loss of chromosomes Y, 1, 2, 6, 10, 13, 17 and 21 or gains of chromosomes 4, 7, 15, 19 and 20. The 3 most frequent molecular patterns of renal oncocytoma are a wild-type karyotype, loss of chromosome 1 or Y, and rearrangements of 11q13, which harbours the cyclin D1 gene. Losses of chromosomes 1, X, Y, 14 or 21 have also been described for RO. Hybrid oncocytic-chromophobe tumours (HOCT) show genetic profiles that differ from these.
The entities grouped in the WHO classification as the MiT family of translocation renal cell carcinomas comprise RCC with translocations of the microphthalmia transcription factor family members TFE3, TFEB and MiTF. They account for about 40% of renal cell carcinomas in paediatric patients and up to 4% of renal cell carcinomas in adults. Apart from the rarer translocations of TFEB and MiTF, located on chromosome 6 and chromosome 3, respectively, translocations of TFE3, located on chromosome Xp11.2, constitute the largest subgroup. TFE3 or TFEB immunohistochemistry can be used as a surrogate marker for the underlying translocations. Furthermore, melanocytic markers (primarily in TFEB-associated RCC) and cathepsin K are regularly expressed. Helpful for the distinction from clear cell RCC are the negative or weak staining reaction for CAIX and the paucity of keratin expression in these tumours. A MiT translocation renal cell carcinoma should be considered in RCC with unusual morphology and in young patients. A summary overview of clinical, morphological, immunohistochemical and molecular features can be found in the original publication (Table 2). After immunohistochemical detection of TFE3, TFEB or MiTF, a confirmatory FISH analysis or an NGS-based method should be performed.

Medullary RCC

Medullary RCC, which occurs almost exclusively in patients with sickle cell anaemia or, more rarely, other haemoglobinopathies, is an aggressive high-grade adenocarcinoma with an infiltrative, glandular growth pattern and a desmoplastic stromal reaction. Loss of INI-1 protein expression is diagnostic. In the absence of a haemoglobinopathy, medullary carcinomas should be classified as unclassified renal cell carcinoma with a medullary phenotype.

Hereditary RCC syndromes

In autosomal dominantly inherited von Hippel-Lindau (VHL) syndrome, germline mutations in the VHL gene are present, predisposing to clear cell RCC as well as extrarenal neoplasms such as haemangioblastomas of the central nervous system, phaeochromocytomas, cystadenomas of the pancreas and epididymis, neuroendocrine tumours and renal cysts. Suspicion of VHL syndrome should be communicated when a clear cell RCC is detected in a patient under 46 years of age or when multiple clear cell RCC are detected. On conventional morphology, a distinctive feature compared with sporadic clear cell RCC is, apart from intratumoural cystic areas, an occasional clear cell papillary morphology, which should nevertheless be classified as pure clear cell RCC on the basis of its immunohistochemical and cytogenetic features.

Succinate dehydrogenase (SDH)-deficient renal cell carcinomas are very rare. In most SDH-deficient RCC, loss-of-function germline mutations in the SDH subunit SDHB are present. SDH-deficient RCC usually show a uniform morphology with vacuolated, eosinophilic cytoplasm containing cytoplasmic inclusions. Positivity for pancytokeratin is absent in 25% of cases. As a surrogate marker for the underlying SDH deficiency, a complete loss of SDHB expression is found on immunohistochemistry. A strong cytoplasmic SDHB staining reaction counts as retained SDHB expression; adjacent tubules should be used as an internal positive control.
Eosinophilic RCC that cannot be further classified, cytokeratin-negative oncocytic tumours and also sarcomatoid tumours should be investigated further. Clinical criteria are young patient age, multifocal occurrence, a positive family history and the presence of potentially SDH-deficient neoplasms such as paragangliomas/phaeochromocytomas, gastrointestinal stromal tumours and pituitary adenomas.

In the autosomal dominantly inherited hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome, germline mutations in the fumarate hydratase gene are present. In addition to the frequent occurrence of numerous cutaneous and uterine leiomyomas, RCC occur with lower penetrance; these show morphological similarity to type II papillary RCC and, in particular, strikingly prominent nucleoli with perinucleolar halos, which represent an important conventional morphological criterion. As a surrogate marker for the underlying FH deficiency, a complete loss of FH expression is found on immunohistochemistry. However, this is detectable in only 80–90% of FH-deficient RCC, so in cases of well-founded suspicion molecular pathological testing should follow despite a positive staining reaction for FH.

Emerging and provisional tumour entities

New, molecularly defined renal cell tumours had already been classified as emerging tumour entities in the 2012 Vancouver classification. These include ALK translocation RCC, which is based on a fusion of the anaplastic lymphoma kinase (ALK) gene with various fusion partners. On conventional morphology, variable, mostly cribriform or papillary growth patterns have been described. Clinical case reports show a therapeutic response to the ALK inhibitor alectinib. If an ALK translocation RCC is suspected, ALK immunohistochemistry is recommended as a screening test; ALK FISH or NGS-based methods can be used for definitive confirmation.
With the steadily improving understanding of the molecular pathogenesis of prostate cancer, new molecular biomarkers with prognostic and therapy-predictive value have been developed; these were discussed at the conference with regard to the available evidence and a possible implementation in risk stratification.

Prognostic biomarkers

The benefit of prognostic biomarkers in prostate cancer lies in correctly differentiating lethal, curable and insignificant tumours as a prerequisite for devising individualised therapy, ideally at the time of the initial diagnostic biopsy. Particularly in the therapeutic management of patients with low or intermediate risk there is frequent disagreement, so new molecular biomarkers would be helpful precisely in this setting. The proliferation marker Ki-67 is determined, in the form of the Ki-67 proliferation index, as a diagnostic and prognostic biomarker in various tumour entities.
A recent meta-analysis of 21 studies with a total of 5419 patients with nonmetastatic prostate cancer impressively demonstrated the prognostic value of the Ki-67 proliferation index with respect to cancer-specific survival, metastasis-free survival and overall survival. Several studies have also confirmed the prognostic value of the Ki-67 proliferation index on prostate needle biopsies. Drawbacks of the Ki-67 proliferation index are the high interobserver variability and the variability of the scoring method. In addition, further studies are needed to define the thresholds for assignment to a low-risk or high-risk category.

The tumour suppressor PTEN regulates the oncogenic AKT-mTOR signalling pathway. Aberrations of the PTEN gene are found in prostate cancer in about 20% of nonmetastatic and 40% of metastatic cases. PTEN status has prognostic value for biochemical recurrence and a lethal course after radical prostatectomy. Detection of PTEN loss in biopsy material increased the risk of upgrading in the prostatectomy specimen, of earlier occurrence of castration-resistant prostate cancer (CRPC), of metastasis and of tumour-specific death. In active surveillance cohorts, the risk of upgrading on re-biopsy was increased 2.6-fold in the case of PTEN loss. A predictive value of PTEN status with regard to radiographic progression-free survival was shown in the current phase III IPATential150 trial, in which the AKT inhibitor ipatasertib in combination with abiraterone and prednisolone is being investigated in metastatic CRPC (mCRPC). In summary, the Ki-67 proliferation index and PTEN status were judged to be potentially useful prognostic biomarkers in the evaluation of active surveillance in patients with ISUP grade 1 (and/or ISUP grade 2) prostate cancer. There was consensus, however, that prospective studies for validation and for comparison with alternative biomarkers are necessary before routine use can be recommended.

RNA-based biomarkers

Gene expression signatures represent prognostic and predictive biomarkers in localised prostate cancer. Methodologically, they are based on quantification of mRNA from FFPE tissue by RT-PCR (Prolaris, Myriad Genetic Laboratories, Inc., Salt Lake City, UT, USA; OncotypeDx, Genomic Health, Inc., Redwood City, CA, USA) or microarray (Decipher, GenomeDX Biosciences, San Diego, CA, USA). Multiple studies have demonstrated prognostic value for all 3 assays. In conclusion, the working group regards the targeted use of RNA-based assays for estimating the risk of progression during active surveillance and after radical prostatectomy as sensible in principle. Before routine use of these costly assays can be recommended, however, further prospective validation studies in active surveillance cohorts are needed, in which they should also be compared with established and currently emerging biomarkers (such as Ki-67 or PTEN).

Predictive biomarkers: DNA repair deficiencies and androgen receptor alterations

As in other tumour entities, deficiencies of homologous recombination repair (HRR) and of mismatch repair (MMR) have also been identified in prostate cancer as predictors of therapeutic response to chemotherapy or immunotherapy.
In about 20% of all advanced castration-resistant prostate cancers (CRPC), alterations in HRR-associated genes such as BRCA1/2 and ATM can be detected, of which about half are germline mutations. A significant enrichment of somatic HRR mutations in metastatic prostate cancer (mCRPC) compared with primary prostate cancer suggests a more aggressive tumour biology of HRR-deficient malignancies. Supporting this, HRR germline mutations are associated with aggressive histological variants (ductal adenocarcinoma, intraductal carcinoma of the prostate [IDC-P], Gleason pattern 5) and lethal disease courses. In 2 retrospective studies, HRR deficiencies have been described as a predictor of response to platinum-based chemotherapy. In May 2020 in the USA, and now in November also in Europe, the PARP inhibitor olaparib was approved for patients with HRR-mutated mCRPC and disease progression on abiraterone and enzalutamide.

Mutations in genes encoding the MMR proteins are found in up to 10% of all CRPC and in less than 3% of primary prostate cancers. Like HRR mutations, MMR mutations are associated with aggressive histological variants (ductal adenocarcinoma, Gleason pattern 5). In contrast to HRR mutations, only about 20% of MMR mutations are germline mutations. Studies point to a therapeutic response of MMR-deficient CRPC to checkpoint inhibitors. In summary, it is recommended that all patients with localised prostate cancer of ISUP grade ≥4, localised prostate cancer of any ISUP grade with a PSA ≥20 ng/ml, or metastatic prostate cancer be offered germline HRR and MMR mutation analysis when clinically indicated. HRR and MMR mutation analysis of tumour tissue, preferably of metastatic tissue, should be offered to all patients with metastatic prostate cancer. Assessment of MMR deficiency should comprise immunohistochemistry for MLH1, PMS2, MSH2 and MSH6, with or without analysis of MSI status and/or sequencing of the MMR genes. For the assessment of HRR deficiency, sequencing of at least BRCA1/2, with the option of detecting amplifications, should be performed.

Androgen receptor alterations

Genetic aberrations of the androgen receptor (AR) such as point mutations, amplifications of the AR gene, AR splice variants (above all the ARv7 splice variant) and amplifications of AR enhancer elements lead to constitutive activation of AR signalling under androgen ablation and represent the molecular pathological correlate of castration resistance in CRPC. Newer agents with different therapeutic targets, such as reduction of androgen production (abiraterone) or direct androgen receptor inhibition (enzalutamide), stand alongside taxane-based chemotherapy as therapeutic options. The ARv7 splice variant in circulating tumour cells (CTCs) and AR amplifications in cell-free DNA (cfDNA) have been investigated as potential therapy-predictive biomarkers. In a retrospective study, detection of the ARv7 splice variant in CTCs was associated with resistance to AR pathway inhibitors. In addition, detection of AR amplifications in cfDNA could be useful as a predictor of response to AR pathway inhibitors. In the absence of prospective randomised trials, however, routine testing in mCRPC is currently not recommended.
Tissue-based biomarkers currently have no role in this setting because of their only weak prognostic and absent predictive value.

Diagnostic biomarkers: neuroendocrine prostate cancer (NEPC)

Distinguishing primary small cell neuroendocrine prostate cancer (NEPC) and therapy-associated neuroendocrine prostate cancer (t-NEPC) from prostate cancers with focal neuroendocrine differentiation or from carcinoids is important and at times difficult. Here, small cell morphology is required for the diagnosis of NEPC, since neuroendocrine markers are not specific for small cell NEPC and are also seen focally in conventional adenocarcinomas. Genetic aberrations such as RB or p53 inactivation are indeed enriched in NEPC but are likewise found in conventional adenocarcinomas, above all in CRPC. Genomic studies of CRPC show an association between neuroendocrine morphology and neuroendocrine transcriptional signatures; however, this is not present in all cases. Since focal neuroendocrine differentiation is associated with higher Gleason scores of ordinary prostatic adenocarcinoma, routine immunohistochemical testing for neuroendocrine markers is not recommended in these tumours. Robust therapy-predictive biomarkers for response to AR pathway inhibitors are not available for advanced CRPC; in future, a combination of molecular and conventional morphological features could be used for this purpose.
Individual mHLA-DR trajectories in the ICU as predictors of early infections following liver transplantation: a prospective observational study

Liver transplantation (LT) is a cornerstone treatment for patients with end-stage liver diseases, offering significant improvements in survival and quality of life. Over the past three decades, survival rates after LT have markedly increased. However, infections continue to be a major complication in the posttransplant period and remain the leading cause of early mortality and morbidity despite advances in surgical techniques, immunosuppressive drugs and infection control strategies. Notably, infections are critical issues within the first 3 months after LT, accounting for 33–51% of deaths according to pre-LT disease severity. Consequently, effective monitoring and assessment of the risk of infectious complications remain a critical challenge for providing timely and individualized patient care. Several risk factors for infections following LT have been identified and are related to pre-LT conditions and surgical complications. However, to date, no data support immune monitoring to assess the risk of post-LT infections. The post-LT prognosis is impaired for patients with pre-LT severe liver disease, especially patients who present with acute-on-chronic liver failure (ACLF). These patients already present with marked immune alterations known as cirrhosis-associated immune dysfunction (CAID) before LT. Like sepsis, ACLF is characterized by both systemic inflammation and profound immunosuppression, likely as a consequence of alterations in the gut-liver axis, leading to intestinal hyperpermeability and dysbiosis. This results in continuous immune stimulation by microbial antigens, ultimately causing immune cell exhaustion. Consequently, both innate (e.g., increased numbers of immature neutrophils, low expression of HLA-DR on monocytes, altered monocyte release of inflammatory cytokines) and adaptive (e.g., lymphopenia, increased expression of checkpoint inhibitors, altered IFN-γ lymphocyte release) immune responses are impaired in ACLF patients, dramatically increasing their susceptibility to infections. While ACLF patients face greater perioperative risks and posttransplant complications, with infections being the predominant cause of death within one year post-LT, the potential impact of immune status before LT on post-LT outcomes (infections, graft rejection, mortality) has yet to be fully explored. This underscores the need for comprehensive studies that include both pre-LT and post-LT assessments to better delineate individualized post-LT management strategies. In the present work, we leveraged standardized cellular immunology parameters, which are now commonly used in intensive care unit (ICU) patients, to monitor the occurrence of immunosuppression following injuries (sepsis, trauma, surgery) and its association with infections. In a prospective observational monocentric study, we measured monocytic HLA-DR (mHLA-DR) expression, T lymphocyte subsets, and ex vivo IFN-γ release following non-antigen-specific stimulation before and over one month after LT in a cohort of patients receiving the same immunosuppression protocol. The primary objective was to assess whether any immunological parameters are associated with clinical outcomes, such as infections, graft rejection and one-year mortality.
Patients We conducted an observational, prospective and longitudinal study to assess the kinetics of immune parameters following LT. We consecutively enrolled patients from February 2020 to May 2023 in the EdMonHG study (monocytic expression of HLA-DR after liver transplantation, ClinicalTrials.gov identifier NCT03995537) at Lyon University Hospital (Hospices Civils de Lyon). The study was conducted in accordance with the Helsinki Declaration and approved by the Comité de Protection des Personnes Ile de France XI (approval number 19039–40433). Written informed consent was obtained from all participants prior to enrollment. The inclusion criteria included patients awaiting LT, with acute liver failure (ALF), compensated cirrhosis (compensated advanced chronic liver disease [cACLD] with hepatocellular carcinoma [HCC]), or decompensation of cirrhosis, with or without organ failure (decompensated advanced chronic liver disease [dACLD], including nonacute decompensation [N-AD], acute decompensation [AD] and ACLF). Patients receiving immunosuppressive therapy before LT (with the exception of corticosteroids) and patients without underlying liver disease were excluded. Patients awaiting multiorgan transplantation or retransplantation were also not eligible for the study. Outcome Postoperative infections were defined according to the American Society of Transplantation (Supplementary Data), and only significant infections were recorded (excluding uncomplicated cystitis and catheter colonization). The diagnosis of acute graft rejection was based on the presence of liver enzyme disturbances and histological criteria, according to the Banff schema for grading liver allograft rejection: an international consensus document, with a Banff score ≥ 4. Postoperative complications and infections were analysed if they occurred within 1 month post-LT, and survival status was assessed at 1 year post-LT. Finally, patients finished the study 1 year after inclusion if no liver transplantation occurred at that time. LT management Following deceased-donor graft assignment, orthotopic LT was performed according to standard procedures. Immunosuppressive therapy, including basiliximab induction, corticosteroids until day (D)7 and mycophenolate mofetil, was started immediately. Tacrolimus (with a target trough concentration of 8–10 ng/mL) was introduced on D3. Immunomonitoring Blood samples were collected before LT (at inclusion and then every 3 months until LT or earlier in case of acute events) and twice a week for 1 month following LT. We analysed mHLA-DR expression, lymphocyte subsets and T-cell function. mHLA-DR expression was measured via flow cytometry in fresh whole blood samples according to a standardized protocol. The results were obtained on a Navios Cytometer (Beckman Coulter, FL) and are expressed as the number of antibodies bound per cell (AB/C). Peripheral blood cell counts were performed to assess total lymphocytes, T-cell counts (CD3) and T-cell subsets (CD4, CD8) via flow cytometry. The cells were analysed on an AQUIOS cytometer (Beckman Coulter, FL). T-cell function was assessed via a whole-blood interferon-γ release assay (IGRA). This antigen-independent test uses an enzyme-linked immunofluorescence assay (ELFA) to measure IFN-γ production in response to phytohemagglutinin A (PHA) stimulation. The results were obtained on a VIDAS-3 (bioMérieux, Marcy l'Etoile, France) and expressed as a reference fluorescence value (RFV).
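A brief note on the AB/C readout may help readers unfamiliar with standardized mHLA-DR assays: median fluorescence intensities are typically converted into antibodies bound per cell using calibration beads, which is what makes values comparable across instruments and centres. The R sketch below illustrates that conversion only in general terms; the bead values and monocyte fluorescence readings are hypothetical and do not describe the kit, lot or exact protocol used in this study.

```r
# Illustrative conversion of monocyte PE fluorescence to antibodies bound per cell (AB/C).
# Bead ABC values and all MFIs below are hypothetical, for demonstration only.
beads <- data.frame(abc = c(500, 6000, 24000, 62000),   # nominal PE molecules per bead peak
                    mfi = c(120, 1450, 5600, 14800))    # measured median fluorescence per peak

cal <- lm(log10(abc) ~ log10(mfi), data = beads)        # standard log-log calibration line

mfi_to_abc <- function(mfi) 10^predict(cal, newdata = data.frame(mfi = mfi))

round(mfi_to_abc(c(900, 3200, 7800)))                   # monocyte MFIs from three samples
# Values below the ~13,500 AB/C lower reference limit cited in the Results would be
# flagged as reduced mHLA-DR expression.
```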
Statistics The results are expressed as medians and interquartile ranges (IQRs) or numbers and percentages. Univariate comparisons were performed via the Mann-Whitney U test for two groups, the Kruskal-Wallis test for more than two groups of continuous variables, and the chi-square test or Fisher's exact test for categorical variables. For post-LT biological data analysis, we censored patients at the time of major immune event occurrence (i.e., treatment of acute cellular rejection or severe infection as defined above). Backwards stepwise multivariate analysis via a logistic regression model was performed to assess factors that predict post-LT outcomes (infections, acute graft rejection, and one-year mortality). Variables with a p value < 0.10 in the univariate analysis were included in the model. The area under the receiver operating characteristic (ROC) curve was constructed to identify optimal cut-off values for quantitative variables, defined as the value associated with the highest sum of sensitivity and specificity (Youden's index). In cases of collinearity between two variables, we selected the variable that resulted in the lowest Akaike information criterion (AIC) to ensure a better model fit. To identify patients with common post-LT mHLA-DR kinetics over time (trajectory endotypes), we used KmL (K-means for longitudinal data), R package 2.4.1. The KmL method pipeline involves clustering marker trajectories using the k-means algorithm with a Gower adjusted Euclidean distance metric to handle missing data. For each number of clusters (ranging from 2 to 5), we ran the KmL method a thousand times to select the best clustering partition based on the highest Calinski-Harabasz metric, which compares within-cluster and between-cluster dispersion to evaluate partition quality. Since the Calinski-Harabasz metric is not tolerant of missing values, imputation is needed before its computation. Missing values within each cluster were imputed using linear interpolation to follow the cluster's population mean trajectory shape. After determining the best clustering partition for each number of clusters, we then used the Calinski-Harabasz metric again to select the optimal number of clusters. Survival curves were generated via Kaplan-Meier estimates, and differences were compared via the log-rank test. R version 4.0.2 (R Core Team 2018, Vienna, Austria) and GraphPad Prism 6.0 (GraphPad Software, La Jolla, California, USA) were used for all analyses. The significance level was set at p < 0.05.
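To make the trajectory-clustering step more concrete, the following R sketch reproduces its main ingredients (linear interpolation of missing visits, k-means on per-patient mHLA-DR trajectories, and a Calinski-Harabasz criterion to choose among 2 to 5 clusters) on simulated data. It deliberately uses base R rather than the KmL package and omits the Gower-adjusted distance, so it should be read as a simplified illustration of the approach described above, not a reimplementation of it; the sampling days and all values are invented.

```r
set.seed(1)
days  <- c(1, 3, 5, 7, 10, 15, 21, 30)      # illustrative post-LT sampling days
n_pat <- 60

# Simulate two mHLA-DR endotypes (AB/C): recovering vs. persistently low trajectories
recovering  <- t(sapply(seq_len(n_pat / 2),
                        function(i) 6000 + 700 * days + rnorm(length(days), 0, 2000)))
persist_low <- t(sapply(seq_len(n_pat / 2),
                        function(i) 6000 +  80 * days + rnorm(length(days), 0, 2000)))
traj <- rbind(recovering, persist_low)
traj[sample(length(traj), 30)] <- NA         # a few missing visits

# Linear interpolation of missing time points within each trajectory
interp <- function(x) {
  ok <- !is.na(x)
  approx(days[ok], x[ok], xout = days, rule = 2)$y
}
traj_imp <- t(apply(traj, 1, interp))

# Calinski-Harabasz index: between-cluster vs. within-cluster dispersion
ch_index <- function(x, cl) {
  k <- length(unique(cl)); grand <- colMeans(x)
  between <- sum(sapply(unique(cl), function(g) {
    xg <- x[cl == g, , drop = FALSE]
    nrow(xg) * sum((colMeans(xg) - grand)^2)
  }))
  within <- sum(sapply(unique(cl), function(g) {
    xg <- x[cl == g, , drop = FALSE]
    sum(sweep(xg, 2, colMeans(xg))^2)
  }))
  (between / (k - 1)) / (within / (nrow(x) - k))
}

# Run k-means for 2 to 5 clusters and keep the partition with the highest CH index
fits <- lapply(2:5, function(k) kmeans(traj_imp, centers = k, nstart = 100))
ch   <- sapply(fits, function(f) ch_index(traj_imp, f$cluster))
best <- fits[[which.max(ch)]]
table(best$cluster)                          # cluster sizes of the selected partition
```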
Patient clinical characteristics One hundred thirty patients were included. Over the study duration, 100 patients were transplanted: 99 patients underwent LT, and 1 patient underwent combined liver-kidney transplantation and was excluded from the analyses. Among the 30 remaining patients at 1 year, 15 patients died on the waiting list (WL), 4 were removed from the WL due to clinical improvement, and 11 still awaited LT (Fig. ).

Figure legend: Thirty patients were not transplanted after 1 year from inclusion (15 patients died before LT, 4 were removed from the WL for improvement of disease, and 11 still remained on the WL), 99 patients underwent liver transplantation, and 1 patient received combined liver-kidney transplantation. LT: liver transplantation, WL: waiting list.

LT recipients were mainly male (n = 80, 81%), with a median age of 56 years [48–61]. The median MELD score at LT was 20 [15–29]. dACLD accounted for 74 patients (23 with N-AD, 20 with AD, and 31 with ACLF), 20 patients exhibited cACLD, and 5 patients were admitted for ALF. The most common underlying liver disease in ACLD patients was alcohol-related liver disease (ALD, n = 69, 70%), followed by viral infections (n = 12, 12%), autoimmunity (n = 9, 9%), metabolic dysfunction-associated steatohepatitis (MASH) (n = 3, 3%) and progressive familial intrahepatic cholestasis (n = 1, 1%). ALF etiologies were acute hepatitis B virus infection (n = 2), autoimmune hepatitis (n = 1), post-traumatism ischemia (n = 1) and malignant hyperthermia (n = 1). Among the ACLF patients, the median number of organ failures (OFs) at inclusion was 2 [1–3]; 7 patients (22.6%) presented with Grade 1 ACLF, 12 (38.7%) with Grade 2 ACLF and 12 (38.7%) with Grade 3 ACLF. With respect to OF, liver failure (20/31), coagulation failure (20/31) and kidney failure (10/31) were the most common. Thirty-four patients were hospitalized when called for LT (22 in the ICU). The donors were mainly men (n = 63, 64%), with a median age of 66 [51–72] years.
The main causes of donor death were vascular (n = 49, 50%) and anoxic (n = 31, 31%), followed by trauma (n = 17, 17%). Seventeen grafts were donated after circulatory death (DCD). The median surgery time was 380 [309–458] minutes, and the median cold and warm ischemia times were 395 [328–468] and 37 [27–41] minutes, respectively. Recipients received a median of 3 [0–5] red blood cell units during surgery, and the median lactate peak was 4.3 mmol/l [3.2–7.3]. After LT, 12 patients received corticosteroids longer than one week because of the increased risk of acute graft rejection. Kinetics of immune parameters before and following LT Overall, following LT, immune parameters exhibited a similar pattern of evolution. In the first days post-LT, we observed a decrease in all the values compared with both the baseline values and the laboratory reference ranges. mHLA-DR expression then progressively increased until D30, returning above the lower limit of normal values (i.e., over 13,500 AB/C [24]) between D10 and D15 post-LT (Fig. A). The lymphocyte and T-cell counts increased from D1 to D7 but remained below normal values (i.e., under 1000 cells/µl and 595 cells/µl according to laboratory standards, respectively) until D30 (Fig. B, ). CD4 and CD8 T cells decreased after LT, reaching the lower limit of normal values (i.e., 336 cells/µl and 125 cells/µl, respectively) from D7‒D10 until D30 (Fig. D‒E). Finally, IFN-γ production levels were profoundly altered on D1 and remained low throughout the follow-up period (Fig. F). Association between immune parameters and clinical outcomes Association with the occurrence of infections At least one severe early post-LT infection occurred in 35 patients (35.4%). The median time to diagnosis was 9 [6–14] days. The most frequent infections were intra-abdominal infections (n = 19), followed by pneumonia (n = 15) and bacteremia (n = 2). Recipients, donors and transplantation characteristics according to the occurrence of infections are depicted in Table . Infected patients exhibited increased severity of pre-LT liver disease, organ failure at the time of LT and a greater number of RBCs transfused during LT. In terms of immune parameters, the mHLA-DR values were significantly lower from D5 to D15 (Supplementary Table 1, Fig. A) in patients who later developed infections. No other difference in T lymphocyte count or T-cell function was found at any time (Supplementary Table 1, Supplementary Fig. 1). Since mHLA-DR expression was the only immune parameter that differed between groups (i.e., no infection vs. forthcoming infection), we further assessed its predictive performance for early post-LT infections at each time point where significant differences were observed. On day 7, the area under the curve (AUC, from ROC analysis) was 0.80 (95% CI [0.70–0.90], p < 0.0001), with an optimal cut-off value of 11,000 AB/C (Se: 77%, Sp: 76%), as determined by the Youden index. On day 10, the AUC was 0.86 (95% CI [0.73–0.99], p < 0.0001), with an optimal cut-off value of 12,000 AB/C (Se: 85%, Sp: 79%). On day 15, the AUC was 0.94 (95% CI [0.88–1.00], p < 0.0005), with an optimal cut-off value of 13,000 AB/C (Se: 88%, Sp: 100%). Analyses were not performed for D5 because of the low number of patients. Next, we conducted a multivariate analysis to determine whether mHLA-DR remained an independent predictor of future infection when clinical confounders were included. 
For this purpose, we included the MELD score at LT, the number of red blood cell units transfused during LT, baseline mHLA-DR and D7 mHLA-DR levels in the model. The severity of liver disease, presence of organ failure at the time of LT, ACLF grade and hospitalization status at the time of LT were also tested as alternatives to the MELD score (due to their collinearity) but demonstrated a poorer model fit. We focused on the D7 mHLA-DR value despite the lower AUC because this time point was the most relevant regarding the timing of infection events and allowed us to maximize the number of patients. As shown in Table , decreased mHLA-DR expression (< 11,000 AB/C) at D7 post-LT (odds ratio = 12.1 [4.4–38.2]) was independently associated with the occurrence of post-LT infections. A MELD score > 30 was also significantly associated with post-LT infections in the model (odds ratio = 4.9 [1.4–18.4]), whereas the number of red blood cell units transfused and baseline mHLA-DR expression were not. Infection-free survival curves, categorized by D7 mHLA-DR levels below or above 11,000 AB/C, are depicted in Fig. B. These curves showed that lower post-LT mHLA-DR values were significantly associated with a greater occurrence of infections. Lack of association of immune parameters with the occurrence of graft rejection Acute graft rejection was documented on liver biopsy in 14 patients, within a median of 9 [6–11] days after LT, with a median Banff score of 5 [5–6]. As shown in Supplementary Table 2, no clinical factors were associated with the occurrence of acute graft rejection, nor were any immune markers (Supplementary Fig. 2). Association with 1-year mortality Among LT patients, the 1-year survival rate was 91.9%. The patient, donor and transplantation characteristics according to 1-year mortality are described in Table . Nonsurvivors experienced more complications after LT, including infections (75% vs. 32%, p = 0.04), surgical revisions (75% vs. 31%, p = 0.03), and graft dysfunction (defined according to Olthoff's criteria , 75% vs. 27%, p = 0.02). However, nonsurvivors did not experience acute graft rejection (0% vs. 15%, p = 0.50). Immune alterations were more severe in patients who died within the first year after LT. Regarding mHLA-DR, differences were observed from D10 (median 4900 AB/C [4300–8800] vs. 15,600 AB/C [9200–23,000] in nonsurvivors and survivors, respectively; p = 0.002) through D30 (median 10,200 [8700–11,700] vs. 24,500 [18,600–31,300] in nonsurvivors and survivors, respectively; p = 0.03). No difference was found according to baseline mHLA-DR expression (Fig. C, Supp Table ). Total lymphocyte, T-cell, CD4 and CD8 T-cell counts were also lower in nonsurvivors than in survivors, but only at D10 (Supp Table , Supp Fig. ), whereas no difference in IFN-γ levels released following stimulation was found. At D10, the mHLA-DR AUC for the prediction of one-year post-LT mortality was 0.86 (95% CI [0.75–0.97], p = 0.001), with an optimal cut-off value of 9500 AB/C (Se: 86%, Sp: 73%). At D15, the mHLA-DR AUC was 0.75 (95% CI [0.55–0.95], p = 0.04), with an optimal cut-off value of 15,800 AB/C (Se: 83%, Sp: 53%). At D30, the mHLA-DR AUC was 0.96 (95% CI [0.91–1.00], p = 0.02), with an optimal cut-off value of 13,500 AB/C (Se: 100%, Sp: 94%). 
The survival curves categorized by D10 mHLA-DR level (below vs. above 9500 AB/C) revealed a poorer prognosis in patients with low mHLA-DR values, based on an analysis of the 86 patients (out of the 99) for whom D10 mHLA-DR values were available. The small number of deceased patients (n = 8) prevented us from achieving sufficient statistical power to conduct a multivariate analysis. K-means clustering analysis Given the heterogeneity of post-LT mHLA-DR expression kinetics, we performed an in-depth K-means clustering analysis to identify distinct mHLA-DR expression patterns over time. This method allowed us to classify patients on the basis of their recovery trajectories, providing a clearer understanding of how immune status evolves after transplant and its impact on clinical outcomes (Fig. ). While all clusters started with mHLA-DR values below 10,000 AB/C, they primarily differed from each other in their recovery slope and thus the day on which their median values returned to normal levels (i.e., 13,500 AB/C). Cluster 1 (n = 35) started with a median value of 4100 AB/C [2900–5000] at D1 and reached the normal range by D20. Cluster 2 (n = 46) started with a median value of 6400 AB/C [5300–8900] and reached the normal range by D7. Cluster 3 (n = 15) started with a median value of 7800 AB/C [6200–9700] and reached the normal range by D3. Several recipient, donor, and transplantation characteristics were significantly associated with cluster distribution (Table ). Notably, while patients with pre-LT organ failure were predominant in Cluster 1, ALF and ACLF patients were also represented in Clusters 2 and 3, but in lower proportions. Consistent with the previous findings (i.e., parameters analysed in a static context), the clusters also demonstrated significant differences in clinical outcomes: Cluster 1 had more infections and lower survival rates, Cluster 2 had less severe deterioration than Cluster 1, and Cluster 3 had the best outcomes (Table , Fig. B, ). Most importantly, multivariate analysis revealed that belonging to Cluster 1 (compared with the other two clusters) was an independent parameter significantly associated with the occurrence of infections (odds ratio of 7.5, p < 0.001), as was having a MELD score > 30 at the time of the transplant (Table ). 
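To make the trajectory-clustering step concrete, the sketch below is a simplified Python analogue of the KmL workflow described in the Methods: k-means applied to per-patient marker trajectories, linear interpolation of missing time points, and the Calinski-Harabasz index used to choose the number of clusters. It runs on synthetic trajectories with made-up parameters and is not the authors' code or data (the study used the R package kml).

```python
# Simplified Python analogue of the KmL workflow (synthetic data, hypothetical
# values): k-means on per-patient marker trajectories, linear interpolation of
# missing time points, Calinski-Harabasz index to choose the partition.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(1)
days = np.array([1.0, 3, 5, 7, 10, 15, 20, 30])

def make_group(n, start, day_to_normal):
    """Toy trajectories rising linearly from `start` towards 13,500 AB/C."""
    slope = (13_500 - start) / day_to_normal
    traj = start + slope * days[None, :] + rng.normal(0, 1_500, (n, days.size))
    return np.clip(traj, 500, None)

X = np.vstack([make_group(35, 4_000, 20),   # slow recovery
               make_group(46, 6_500, 7),    # standard recovery
               make_group(15, 8_000, 3)])   # fast recovery

# Introduce ~10% missing measurements, then interpolate each patient linearly.
X[rng.random(X.shape) < 0.10] = np.nan
def interpolate_row(row):
    ok = ~np.isnan(row)
    return np.interp(days, days[ok], row[ok])
X = np.apply_along_axis(interpolate_row, 1, X)

# Fit k-means for k = 2..5 and keep the partition with the best CH score.
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=50, random_state=0).fit_predict(X)
    scores[k] = calinski_harabasz_score(X, labels)

best_k = max(scores, key=scores.get)
print({k: round(v, 1) for k, v in scores.items()}, "-> selected k =", best_k)
```

With clearly separated recovery slopes, the Calinski-Harabasz criterion typically selects three clusters, mirroring the slow/standard/fast endotypes reported above; with real data the choice depends on the observed trajectory spread.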
LT is a key treatment for the management of patients with end-stage liver disease. However, infections remain a major complication in the post-transplant period and are the leading cause of early mortality, despite substantial advancements in the field. For example, a large retrospective study revealed that infections are the most frequent cause of death within 3 months post-LT for ACLF patients and the second most common cause of death for patients without pre-LT ACLF . Among the various risk factors for infections following LT, the nature and intensity of the immunosuppression protocol are obviously important considerations. Nevertheless, monitoring immune parameters is not yet routinely used for this purpose. In this context, the present study aimed to assess cellular immune functions via standardized tools available in routine care to investigate the associations between immune parameters and outcomes, particularly the occurrence of infections in patients undergoing LT. Overall, we observed that all immune parameters significantly decreased after LT, but the magnitude of these initial decreases was not associated with any specific outcome. More importantly, the kinetics of parameter restoration provided valuable insights. Among these parameters, mHLA-DR expression has emerged as the most informative. To the best of our knowledge, we showed for the first time that, after LT, delayed restoration of mHLA-DR expression was strongly and independently associated with poor outcomes, notably with the risk of developing early severe infections. In the present study, delayed restoration of mHLA-DR expression from day 5 onwards was associated with the occurrence of subsequent infections. This association was particularly strong from D7, where multivariate analysis (including all clinical confounders) identified mHLA-DR expression as a highly significant independent predictor of infection (OR: 12.1, p < 0.001), alongside the MELD score before LT. 
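For readers less familiar with how such adjusted odds ratios are obtained, the following sketch fits a multivariable logistic model on dichotomised predictors analogous in form to the one summarised above (D7 mHLA-DR below 11,000 AB/C and MELD above 30). It is a hypothetical Python illustration on simulated data with assumed variable names, not a re-analysis of the study cohort.

```python
# Hypothetical illustration (simulated data, assumed variable names): a
# multivariable logistic model with dichotomised predictors, of the same form
# as the analysis summarised above; exponentiated coefficients are odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 99
df = pd.DataFrame({
    "low_d7_mhladr": rng.integers(0, 2, n),  # 1 if D7 mHLA-DR < 11,000 AB/C
    "meld_over_30": rng.integers(0, 2, n),   # 1 if MELD > 30 at transplant
    "rbc_units": rng.poisson(3, n),          # intra-operative RBC units
})
# Simulate an infection outcome whose log-odds depend on the two binary terms.
log_odds = -2.0 + 2.4 * df["low_d7_mhladr"] + 1.5 * df["meld_over_30"]
df["infection"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

fit = smf.logit("infection ~ low_d7_mhladr + meld_over_30 + rbc_units", df).fit(disp=0)
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "CI_low": np.exp(fit.conf_int()[0]),
                        "CI_high": np.exp(fit.conf_int()[1])})
print(summary.round(2))
```

Predictors whose simulated effect is null (here the transfusion term) return odds ratios with confidence intervals spanning 1, which is how the non-significant covariates in the reported model should be read.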
In addition to static analyses (i.e., time point by time point), the K-means clustering analysis revealed that mHLA-DR trajectories offered comparable insights into the slope of mHLA-DR restoration and the associated risk of infection. This highlights the value of longitudinal monitoring of LT patients. The present work extends preliminary studies in the transplantation setting, in which associations between low mHLA-DR levels and infectious risk were reported in kidney and lung transplantation . Two previous studies, with very low numbers of patients (9 and 20, respectively), also suggested similar associations in LT . Given the substantial number of patients in our study (n = 99) and the improvements in several aspects, such as standardized measurement of mHLA-DR, clustering analysis, censoring of values once infection occurred, consideration of immune status before the transplant, and multivariate analysis, the present analysis strongly confirmed the association between post-LT mHLA-DR and infectious complications. LT differs from other organ transplantations because of the heterogeneity of pre-LT conditions, which arises from the diverse indications for transplantation. For example, in our cohort, 36 patients experienced pre-LT organ failure, a condition associated with severe CAID . It is hypothesized that pre-LT immune status may impact post-LT outcomes, which is why we considered it in our analysis. Although baseline mHLA-DR expression was not an independent predictor of infections, its level was significantly lower in patients with delayed post-LT immune recovery (Cluster 1). Pre-LT immune status and pre-LT severity may not be sufficient to predict post-LT outcome and immune recovery. Clustering analysis revealed that 40% of the ACLF patients, and notably one third of grade 3 ACLF patients, were allocated to Clusters 2 and 3 (i.e., standard and fast post-LT immune recovery, respectively), which were associated with fewer infections and greater survival. Taken together, these data support the hypothesis that post-LT mHLA-DR kinetics may provide additional information beyond initial severity for monitoring infection risk. The present work provides additional results: both static and clustering analyses highlighted an association between delayed mHLA-DR recovery and one-year mortality. Owing to the low number of cases, this aspect should be further assessed in larger cohorts. Additionally, no immune marker, including mHLA-DR, was associated with acute graft rejection in this study. Unexpectedly, post-LT lymphocyte count and function, assessed with an IFN-γ release assay, did not yield significant results for predicting outcome. This lack of association might be attributed to the homogeneous immunosuppressive regimen, including anti-IL-2-R monoclonal antibodies, which target lymphocytes and potentially mask any underlying immune alterations related to post-LT outcomes. On the one hand, the unique immunosuppressive regimen administered to all patients included in our study allowed us to ensure patient homogeneity. On the other hand, this may represent a limitation of our study, implying that our results need to be further confirmed in cohorts with different immunosuppressive strategies. The monocentric nature of our study and some of the scores used for the reported clinical outcomes (e.g., Olthoff's EAD for graft dysfunction) represent another limitation to be acknowledged. 
A continuous evaluation of graft dysfunction, particularly using scores such as LGRAFT or EASE , could be relevant. As mentioned above, further studies are needed to determine how mHLA-DR expression monitoring may be incorporated into post-LT management. Specifically, larger cohorts are needed to validate the interest in post-LT mHLA-DR monitoring, regardless of immune suppressive regimens. These cohorts could be used to develop dynamic scores for the prediction of early infectious risk following LT. Recent studies have demonstrated that LT is an acceptable therapeutic strategy for critically ill patients, especially for Grade 3 ACLF patients . However, post-LT prognostic factors are still debated for these patients , and the development of individualized strategies for immune suppression and post-LT management may be useful in this setting. Since infections are among the most frequent causes of early death following LT , the identification of patients with a higher risk of infections could lead to the modulation of immune suppression to improve post-LT outcomes. Exploring these options in future clinical trials could provide valuable insights into optimizing posttransplant care. Moreover, understanding the underlying mechanisms driving different mHLA-DR trajectories and their impact on immune recovery can pave the way for novel targeted therapeutic interventions. This study provides the first longitudinal monitoring of mHLA-DR expression before and after LT and its association with clinical outcomes. Delayed mHLA-DR restoration, whether measured at specific time points or assessed through trajectory clustering, was a significant independent predictor of future infection, as were high pre-LT MELD scores. These findings underscore the importance of early immune monitoring and suggest the benefit of individualized transplant management to improve outcomes. Additional studies are warranted to validate these findings in multicenter settings with diverse immunosuppression protocols. Additional file 1. Additional file 2. Additional file 3. |
Theranostic cells: emerging clinical applications of synthetic biology | c84f7a4c-0ca1-4bb6-9d8f-8db6d8319146 | 8261392 | Pathology[mh] | Current methods for diagnosing and treating disease are hampered by their inability to respond locally and dynamically to disease states. Many diagnostic approaches necessitate invasive biopsies and subsequent pathological analysis – . Therapeutics face the challenge of administration without real-time knowledge of the internal diseased state. Despite recent advances in targeting different diseases, tissues or cell types of interest , many biological-based therapeutics act systemically, thereby increasing the risk of adverse effects and potentially reducing patient compliance . Synthetic biology, a field that strives to engineer biology to perform user-defined functions, is well poised to meet the need for new classes of diagnostics and therapeutics. Early advances in synthetic biology led to the creation of prokaryotic cells capable of performing complex computations, whereby they produce differential output based on external signals . Combining synthetic biology with concurrent advances in protein engineering led to the creation of cells that could use synthetic receptors to activate native pathways . These systems laid the groundwork for building ‘theranostic’ cells, which can serve as both diagnostic tools and therapeutic delivery systems. Theranostic cells are engineered to express sensors that detect the presence of a disease marker (for example, a cell surface receptor that targets a ligand) and signalling machinery that precisely controls a cellular response (for example, therapeutic protein expression or cell killing) . Relative to small molecules and biologics, which generally act systemically and in an untimed manner, these therapies enable more precise control as they should only activate upon sensing the target biomarker. A major milestone in the field of theranostic cell engineering was the 2017 FDA approval of tisagenlecleucel (Kymriah), the first gene therapy to be approved in the USA . Tisagenlecleucel is a chimeric antigen receptor (CAR) T cell therapy. It consists of immune cells taken from the patient, which are then engineered to express receptors that target B cell precursor acute lymphoblastic leukaemia. Since then, three other CAR T cell therapies — axicabtagene ciloleucel (Yescarta) , brexucabtagene autoleucel (Tecartus) and lisocabtagene maraleucel (Breyanzi) — have been approved to treat different types of blood cancer. These therapies all demonstrate the potential of cell-based therapies as a new treatment modality. Building on this success, many academic laboratories and companies are developing cell therapies that are more effective, safe and applicable to a wide variety of diseases. Our increased understanding of how cells function, combined with technological advances over the past decade, has expedited cell diagnostic and therapeutic development. For instance, research into the gut microbiome has illuminated the integral and complex role that microorganisms play in regulating physiology , and advances in microbial engineering have enabled the creation of cells that can dynamically regulate this internal microbial ecosystem . Similarly, genome editing techniques, such as CRISPR–Cas technologies, have led to more precise and potentially safer methods to introduce targeted edits into the human genome, a critical step for mitigating oncogenic adverse effects associated with random genomic integration of other gene editing methods such as viral vectors. 
More advanced cloning techniques such as Gibson assembly and dramatically reduced costs of DNA synthesis have enabled the development of new biological ‘parts’ in both prokaryotic and mammalian systems, significantly shortening the ‘design–build–test’ cycle. High-throughput sequencing has driven rapid and inexpensive organism characterization, and thus faster subsequent engineering . Advances in robotics and high-throughput screening have helped to automate and streamline the construction and evaluation of engineered systems. Finally, advances in in vitro co-culture methods have enabled more robust and rapid characterization of the ways that different cell types interact with each other in a simulated complex environment – . In this Review, we address recent advances in the applications of bacterial and mammalian cell diagnostics and therapeutics (Fig. ). Whereas previous reviews have focused on these areas separately – , here we provide a broad overview across bacterial and mammalian systems and discuss systems that have been engineered for safer and more effective clinical use. We focus mainly on cellular applications but briefly touch on cell-free systems and viral therapies. First, we discuss bacterial diagnostics and therapeutics, focusing on engineering approaches that have enabled cells to function in the body over extended time periods, and give examples of engineered bacteria that have recently advanced to clinical trials. Then, we explore recent advances in mammalian cell engineering, focusing on ways that chimeric receptors can be engineered to create theranostic cells that modulate the immune system. We conclude by offering our outlook on the challenges that engineered cell diagnostics and therapeutics still face and the advances required for engineered cells to become a new pillar of modern diagnostics and therapeutics. The earliest work in synthetic biology used well-studied systems to engineer microorganisms to respond predictably to environmental changes , . Since then, a plethora of engineered sensors and more advanced genetic circuits have expanded the scope of compounds that microorganisms can sense and the computations that they can perform, resulting in microorganism-based systems with industrial, health and environmental applications . More recently, bacterial sensors have been engineered to function in biological samples (for example, serum and urine) and even within the body, enabling them to serve as low-cost, minimally invasive diagnostics and theranostics that produce a therapeutic output upon sensing a diseased state. We differentiate between ex vivo diagnostics, which are used outside the body, and in vivo diagnostics, which are used inside the body. Ex vivo diagnostics Current ‘gold-standard’ methods to analyse compounds in the body such as ions, metabolites and peptides require the use of advanced machinery and extensive sample processing , . By harnessing microorganisms’ natural sense and respond machinery, biosensors offer a low-cost and potentially more accessible testing alternative. Microorganisms can be engineered to sense target compounds and produce visibly coloured changes in response, serving as a ‘litmus test’ for disease. Such sensors could enable fast and low-cost diagnoses, potentially at the point of care. Whole-cell diagnostics Nearly all ex vivo microbial diagnostics produce an easily detectable output — either a fluorescent protein or a visible pigment — upon recognition of a target signal. 
Multiple groups have engineered Escherichia coli cells to sense and respond to analytes such as micronutrients and sugars – by harnessing native regulators that naturally sense these molecules to control expression of colour-based reporters. For example, the zinc-responsive transcription factors Zur and ZntR can be used to control production of visible pigments, such that cells change to a different colour based on the zinc concentration in serum , (Fig. ). Similarly, the sugar-responsive promoter P cpxP controls the production of fluorescent proteins and serves as the basis of a test for glycosuria (indicative of diabetes onset) . To enable clinical use, the sensing systems can be tuned to respond to physiologically relevant concentrations of the target biomarker. For example, a biosensor for zinc deficiency initially responded to serum zinc levels that were far lower than those that are clinically useful. To shift the response to a higher zinc concentration, a transcriptional repressor was placed under control of a zinc-responsive promoter, such that the repressor is made (and thus the colour turned off) only at sufficiently high levels of zinc. The response threshold can be further tuned by modifying the half-life of the repressor: lower levels of the repressor correspond with higher response thresholds . To enable tests to function in biological samples such as serum and urine, the form factor — that is, the way in which engineered cells are used for sample testing — of the test can be modified. For example, implantation of cells within a hydrogel prevents dilution and loss of signal in urine . Similarly, using highly concentrated sensor cells prevents bacterial death in serum . Beyond detecting molecular biomarkers, bacterial sensors can report on the presence of pathogenic bacteria via quorum sensing, which bacteria naturally use to coordinate population-level responses . For example, E. coli cells engineered to express quorum-sensing proteins from Vibrio cholerae can be used to monitor the presence and proliferation of V. cholerae . Similarly, yeast GPCR pheromone sensors have been used to report on the presence of pathogenic fungi . However, despite reported laboratory successes, ex vivo microbial diagnostics have yet to be used clinically, in part because of regulatory challenges associated with using engineered organisms as diagnostics . Cell-free diagnostics Cell-free systems, which consist of a mixture of nucleic acids, metabolites and proteins, have recently emerged as another biological-based sensing platform . Cell-free systems have the same fundamental transcription and translation machinery as whole cells and can be engineered to detect diverse biomarkers and produce results within minutes of sample addition (Fig. ). These systems have been used to detect viral biomarkers, such as nucleic acids derived from Ebola , Zika and SARS-CoV-2 (ref. ) as well as small molecules such as zinc (which reflects nutrition levels) or quorum-sensing molecules secreted from pathogenic bacteria (which indicate the degree of infection) . An advantage of both microbial and cell-free systems for ex vivo analysis is their ability to function in diverse environments and to produce easily detectable outputs. Sensors can be lyophilized and stored at ambient temperatures for long periods of time, and upon reconstitution with a biological sample they can produce visibly coloured reporters , , . 
This supports the use of these diagnostics in low-resource settings, as they can be shipped to remote regions of the world or easily sold from a pharmacy, then used and interpreted with minimal or no equipment. The safety and logistical considerations to such use will be discussed in the later part of this Review. In vivo diagnostics As bacteria naturally live in symbiosis with the human body, they can be harnessed to serve as in vivo diagnostics, reporting on internal biomarkers in a minimally invasive fashion. Current in vivo diagnostics have been used to detect cancer and inflammation and to monitor gut function and regulation in real time . Microbiome diagnostics The gut microbiome has become an engineering hotspot, as the growing pool of microbiome research has revealed its critical role in maintaining proper immune and digestive function and in drug metabolism . As bacteria naturally colonize the gastrointestinal tract (termed the gut), they have the potential to serve as stable and long-term reporters of its state. Gut inflammation is a hallmark of diseases such as inflammatory bowel disease and Crohn’s disease, but real-time monitoring of inflammation has been difficult, in part, because of a lack of reliable biomarkers in easily accessible samples : traditional markers of inflammation such as CRP (analysed from blood samples) and calprotectin (analysed from stool) are not specific to inflammation of the gut and have high variability. Biomarkers indicative of the reactive oxygen species (ROS) produced in the gut during inflammation would be more valuable, but indicators of ROS, such as tetrathionate , are transient and cannot be detected without invasive procedures. To monitor inflammation in the mouse gut in a non-invasive way, a commensal murine strain of E. coli (NGF1) was engineered to internally sense and record tetrathionate exposure . This engineering approach connects a tetrathionate sensor to a transcriptional element that then continually produces the reporter β-galactosidase. When stool samples from mice that have ingested engineered bacteria are collected and plated, they show β-galactosidase activity based on mouse gut inflammation (Fig. ). Information on the time course of disease progression could be valuable both for better understanding the pathogenesis of gut inflammation and for developing more efficient treatments. To this end, the repressilator, a fundamental synthetic biology tool, was harnessed to create a ‘bacterial clock’ that provides information on cellular activity in the gut. The repressilator functions by using three orthogonal promoter–repressor pairs to control three differentially fluorescent proteins; expression of each protein turns on in a controlled and predictable fashion . When fed to mice and subsequently analysed, these engineered bacteria can report on cellular growth rate and abnormal conditions (such as gut inflammation) that can disrupt standard transcriptional oscillations . Similar systems could be used to dynamically modulate the gut microbiome; the resulting theranostics are described in subsequent sections. Real-time reporting The interplay between nanotechnology and biotechnology has led to the development of devices that can transmit signals from inside the body, generating real-time health reports. For example, bacteria were engineered to produce luciferase upon detection of clinically relevant biomarkers, such as haemoglobin, thiosulfate and molecules indicative of specific bacterial strains . 
These engineered bacteria were embedded in an ingestible electronic capsule that processed the light produced from the bacteria and transmitted the information via radio waves to a phone or computer outside the body (Fig. ). The capsule can safely migrate through the digestive tract, providing real-time information on the insults encountered through the capsule’s journey. This approach has been successfully used to assess blood in the gastrointestinal tract of a pig but has yet to be tested in humans. 
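To make the oscillatory logic of the repressilator-based 'bacterial clock' described above more concrete, the sketch below integrates a minimal three-gene repressilator model in the spirit of the classic Elowitz-Leibler formulation. The parameter values, time units and initial conditions are illustrative assumptions and are not taken from the engineered strains discussed in this Review.

```python
# Minimal sketch of a three-gene repressilator (in the spirit of the
# Elowitz-Leibler model): each protein represses transcription of the next
# gene in the ring, producing sustained oscillations. Parameters are
# illustrative and unrelated to the engineered strains described here.
import numpy as np
from scipy.integrate import solve_ivp

alpha, alpha0, beta, n_hill = 220.0, 0.2, 5.0, 2.0

def repressilator(t, y):
    m, p = y[:3], y[3:]               # mRNA and protein levels
    repressor = np.roll(p, 1)         # protein i-1 represses gene i
    dm = -m + alpha / (1.0 + repressor ** n_hill) + alpha0
    dp = -beta * (p - m)
    return np.concatenate([dm, dp])

y0 = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]   # slightly asymmetric start
sol = solve_ivp(repressilator, (0.0, 300.0), y0, max_step=0.5, dense_output=True)

t = np.linspace(0, 300, 3000)
p1 = sol.sol(t)[3]                    # first reporter protein
peaks = np.where((p1[1:-1] > p1[:-2]) & (p1[1:-1] > p1[2:]))[0] + 1
print("reporter-1 peaks at t ≈", np.round(t[peaks][:4], 1))
print("approximate period:",
      np.round(np.diff(t[peaks]).mean(), 1) if peaks.size > 1 else "n/a")
```

Because each reporter peaks in turn with a roughly constant period, counting how far the ring has advanced when the bacteria are recovered gives a readout of elapsed growth, and perturbations such as gut inflammation show up as deviations from the expected oscillation.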
However, despite reported laboratory successes, ex vivo microbial diagnostics have yet to be used clinically, in part because of regulatory challenges associated with using engineered organisms as diagnostics . Cell-free diagnostics Cell-free systems, which consist of a mixture of nucleic acids, metabolites and proteins, have recently emerged as another biological-based sensing platform . Cell-free systems have the same fundamental transcription and translation machinery as whole cells and can be engineered to detect diverse biomarkers and produce results within minutes of sample addition (Fig. ). These have been used to detect viral biomarkers, such as nucleic acids derived from Ebola , Zika and SARS-CoV-2 (ref. ) as well as small molecules such as zinc (which reflects nutrition levels) or quorum-sensing molecules secreted from pathogenic bacteria (which indicate the degree of infection) . An advantage of both microbial and cell-free systems for ex vivo analysis is their ability to function in diverse environments and to produce easily detectable outputs. Sensors can be lyophilized and stored at ambient temperatures for long periods of time, and upon reconstitution with a biological sample they can produce visibly coloured reporters , , . This supports the use of these diagnostics in low-resource settings, as they can be shipped to remote regions of the world or easily sold from a pharmacy, then used and interpreted with minimal or no equipment. The safety and logistical considerations to such use will be discussed in the later part of this Review. Nearly all ex vivo microbial diagnostics produce an easily detectable output — either a fluorescent protein or a visible pigment — upon recognition of a target signal. Multiple groups have engineered Escherichia coli cells to sense and respond to analytes such as micronutrients and sugars – by harnessing that naturally sense these molecules to control expression of colour-based reporters. For example, the zinc-responsive transcription factors Zur and ZntR can be used to control production of visible pigments, such that cells change to a different colour based on the zinc concentration in serum , (Fig. ). Similarly, the sugar-responsive promoter P cpxP controls the production of fluorescent proteins and serves as the basis of a test for glycosuria (indicative of diabetes onset) . To enable clinical use, the sensing systems can be tuned to respond to physiologically relevant concentrations of the target biomarker. For example, a biosensor for zinc deficiency initially responded to serum zinc levels that were far lower than those that are clinically useful. To shift the response to a higher zinc concentration, a transcriptional repressor was placed under control of a zinc-responsive promoter, such that the repressor is made (and thus the colour turned off) only at sufficiently high levels of zinc. The response threshold can be further tuned by modifying the half-life of the repressor: lower levels of the repressor correspond with higher response thresholds . To enable tests to function in biological samples such as serum and urine, the form factor — that is, the way in which engineered cells are used for sample testing — of the test can be modified. For example, implantation of cells within a hydrogel prevents dilution and loss of signal in urine . Similarly, using highly concentrated sensor cells prevents bacterial death in serum . 
This engineering approach connects a tetrathionate sensor to a transcriptional element that then continually produces the reporter β-galactosidase. When stool samples from mice that have ingested engineered bacteria are collected and plated, they show β-galactosidase activity based on mouse gut inflammation (Fig. ).

Information on the time course of disease progression could be valuable both for better understanding the pathogenesis of gut inflammation and for developing more efficient treatments. To this end, the repressilator, a fundamental synthetic biology tool, was harnessed to create a ‘bacterial clock’ that provides information on cellular activity in the gut. The repressilator functions by using three orthogonal promoter–repressor pairs to control three differentially fluorescent proteins; expression of each protein turns on in a controlled and predictable fashion . When fed to mice and subsequently analysed, these engineered bacteria can report on cellular growth rate and abnormal conditions (such as gut inflammation) that can disrupt standard transcriptional oscillations . Similar systems could be used to dynamically modulate the gut microbiome; the resulting theranostics are described in subsequent sections.

Real-time reporting

The interplay between nanotechnology and biotechnology has led to the development of devices that can transmit signals from inside the body, generating real-time health reports. For example, bacteria were engineered to produce luciferase upon detection of clinically relevant biomarkers, such as haemoglobin, thiosulfate and molecules indicative of specific bacterial strains . These engineered bacteria were embedded in an ingestible electronic capsule that processed the light produced from the bacteria and transmitted the information via radio waves to a phone or computer outside the body (Fig. ). The capsule can safely migrate through the digestive tract, providing real-time information on the insults encountered through the capsule’s journey. This approach has been successfully used to assess blood in the gastrointestinal tract of a pig but has yet to be tested in humans.
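The 'bacterial clock' described above depends on the repressilator producing regular oscillations. The toy model below is a protein-only caricature of a three-gene repression ring with invented parameters (it is not the published circuit model or its measured rates); it only illustrates that mutual repression around an odd-numbered ring yields periodic pulses whose timing could serve as a clock read-out.

```python
"""Toy repressilator: three genes arranged in a ring of mutual repression.

A protein-only caricature with invented parameters; the three entries of p
stand in for the three fluorescent reporters of the 'bacterial clock'.
"""
import numpy as np

alpha, n, K, gamma = 200.0, 4.0, 20.0, 1.0   # max synthesis, Hill coefficient, repression constant, decay
p = np.array([40.0, 1.0, 1.0])               # initial protein levels (arbitrary units)
dt, steps = 0.01, 6000
trace = []

for _ in range(steps):
    repressors = np.roll(p, 1)               # ring: protein 3 -| 1, 1 -| 2, 2 -| 3
    dp = alpha / (1.0 + (repressors / K)**n) - gamma * p
    p = p + dt * dp                          # simple Euler step
    trace.append(p.copy())

trace = np.array(trace)
# Report when protein 1 peaks, as a rough read-out of the oscillation period.
peaks = [i for i in range(1, steps - 1)
         if trace[i, 0] > trace[i - 1, 0] and trace[i, 0] > trace[i + 1, 0]]
print("protein 1 peaks (time units):", [round(i * dt, 1) for i in peaks[:5]])
```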
Naturally occurring bacteria have been used extensively as probiotics for years, and synthetic biology has enabled the creation of engineered probiotics that can treat specific diseases or conditions. Bacteria can be programmed to release therapeutics upon sensing a target compound. In this manner, bacteria have been used to modulate cancer progression, metabolic disorders and microbiome dysbiosis. Several bacteria-based therapeutic systems have advanced to clinical trials.

Cancer therapeutics

Bacteria for tumour targeting

Bacteria have long been explored as potential cancer treatments: in the 1800s, an injection of streptococcal bacteria shrank a malignant tumour , and in the 1970s bacillus Calmette–Guérin, an attenuated strain of Mycobacterium bovis , was approved to treat bladder cancer. More recently, Salmonella typhimurium has gained attention because it preferentially colonizes necrotic and hypoxic tumour microenvironments. The oxygen-deprived, immune-privileged environment is conducive to anaerobic bacterial growth, which subsequently induces host immune responses to target the bacteria and tumours in a cancer antigen-independent fashion . In the past, S. typhimurium has been involved in numerous phase I trials to treat cancers such as melanoma. However, the treatments were ineffective in humans, and failures were attributed to poor tumour targeting and dose-related toxicity .
More recently, treatments utilizing S. typhimurium in combination with chemotherapy drugs have been investigated, and one that targets pancreatic cancer has advanced to a phase II clinical study (NCT04589234). Additional genetically tractable obligate and facultative anaerobes, including Bifidobacterium , Escherichia and Clostridium , were genetically modified to increase tumour specificity by, for example, expressing tumour-targeting peptides or antibodies on the cell surface .

Bacteria can deliver various anticancer effectors upon sensing a diseased state. In general, these therapies function by placing the gene encoding an effector molecule under the control of a promoter that responds to a tumour-specific signal (Fig. ). S. typhimurium was engineered to produce a cytolysin protein HlyE upon sensing hypoxia, which resulted in reduced tumour volume when tested in vivo . Similarly, E. coli strains have been engineered to produce antitumour proteins upon sensing a specific cell density, low oxygen levels or decreasing glucose gradients . These sensors have been coupled to additional effector molecules such as prodrug-cleaving enzymes or short interfering RNAs that suppress tumour growth . A phase I clinical trial (NCT01562626) is currently testing whether Bifidobacterium longum that expresses the prodrug-converting enzyme cytosine deaminase enhances the efficacy of flucytosine-based treatment of solid tumours; the cytosine deaminase is expected to convert flucytosine into the standard chemotherapy drug 5-fluorouracil at the site of the tumour. Although these therapeutic systems are relatively straightforward, tuning activity is an ongoing challenge, as it is critical that they respond to the appropriate signal threshold and generate appropriate levels of effector molecules.

Dynamic delivery of anticancer drugs

To effectively and safely treat cancer, bacteria must be able to deliver the anticancer payload in a controlled fashion and to autoregulate their replication rates. Inducible autolysis has been explored as a strategy to both release a drug and maintain a stable bacterial population , . This approach harnesses quorum-sensing systems. When the concentration of acyl-homoserine lactone (AHL), a quorum-sensing molecule, is low, the cells divide and produce an anticancer drug. As the bacterial density increases, the concentration of AHL reaches a threshold that activates autolysis, releasing the anticancer protein into the tumour microenvironment (Fig. ). Mice injected with these engineered cells showed significant reduction of tumour volume compared with effector alone and cell-only controls .

In conclusion, the preliminary bacteria-based anticancer treatments discussed here hold promise to specifically target and kill cancerous cells. Although therapies that use more advanced genetic circuits are still in preclinical development, many bacteria-based cancer therapies have advanced through phase I clinical trials (Table ).

Limitations to bacterial cancer therapies

Bacteria-based treatments that yield effective results in humans, especially strains with complex, engineered gene networks, remain limited. Balancing the fitness of the bacteria, maintaining stability of the introduced gene circuit, attenuating virulence and increasing target specificity in vivo remain grand challenges to developing bacteria-based cancer therapies.
Furthermore, bacterial treatment of cancers (such as leukaemia) that do not form solid tumours conducive to bacterial colonization would likely be ineffective and dangerous, as such treatments would require high concentrations of bacteria in the bloodstream. Treatments in these cases would likely rely on employing engineered mammalian cells, such as those discussed in subsequent sections. Finally, in some cases it is known that tumours contain their own natural microbiome that influences cancer progression. These tumour-specific microbial communities are highly variable between patients , , and their potentially different effects on therapeutic performance must be taken into consideration during strain selection and engineering .

Gut therapeutics

Engineered microorganisms can modulate the gut microbiome by sensing biomarker levels, providing potential treatments for gut dysbiosis, inflammation and metabolic diseases . Most current therapies require ingestion of engineered bacteria, but efforts are being made to modify microorganisms in vivo – , which could expand the scope of therapeutic applications.

Gut modulation with engineered bacteria

The gut is a prime target for bacterial therapeutics because bacteria naturally colonize the gut and because the gut microbiome plays an important role in modulating diseases such as obesity, diabetes, inflammatory diseases and cancer . E. coli Nissle 1917 is a popular chassis for therapeutic engineering because it is non-pathogenic and easy to engineer, and has a naturally positive effect on the gut microbiome. Other strains, such as Lactobacillus , Clostridium and Bacteroides , have also shown promise in therapeutic development .

Metabolic diseases are a prime target for dynamic modulation, as bacteria can be readily engineered to process the accumulated metabolite. However, these efforts have had mixed results. Hyperammonaemia is a disease characterized by excess ammonia accumulation in the blood, resulting from defective enzymes in the urea cycle. An E. coli Nissle strain was engineered to assimilate ammonia and sequester the nitrogen into the amino acid l -arginine . Administration of these engineered bacteria to mice with hyperammonaemia reduced blood ammonia levels and improved survival. The strain completed phase I clinical trials (NCT03179878), but the programme was terminated owing to ineffectiveness in lowering blood ammonia in humans. A similar strategy was used to address phenylketonuria, a genetic disease caused by an inability to metabolize l -phenylalanine ( l -Phe) (Fig. ). E. coli Nissle engineered to convert l -Phe into other metabolites resulted in increased l -Phe metabolism in monkeys , a strategy that recently passed phase I clinical trials (NCT03516487) and is on track for testing in phase II trials.

Engineered bacteria could also be used to control the composition of the gut microbiome and eliminate pathogenic bacteria. Commensal E. coli Nissle were engineered to target Pseudomonas aeruginosa , a bacterium that can cause serious infection . The E. coli cells contain a genetic circuit encoding antimicrobial peptides and a biofilm-degrading enzyme. Upon detecting the P. aeruginosa quorum-sensing compound, the engineered cells produce the peptide and enzyme (Fig. ). Co-culture of the two strains reduces P. aeruginosa viability and biofilm content. In a mouse infection model, administering the engineered E. coli led to ~70% reduction of P. aeruginosa colonization, providing a viable antimicrobial strategy to combat antibiotic-resistant pathogens.
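The AHL-triggered lysis circuit described under 'Dynamic delivery of anticancer drugs' and the P. aeruginosa-sensing Nissle strain above share one design principle: the effector is only released once a quorum-sensing signal tied to cell density crosses a threshold. The toy simulation below caricatures the lysis version of that logic with invented parameters (the growth, AHL production and threshold values are placeholders, not numbers from either study) to show how it naturally produces repeated pulses of payload release.

```python
"""Toy dynamics of a quorum-sensing lysis circuit for pulsed drug release.

Illustrative parameters only; a caricature of the threshold logic
(cell density -> AHL -> lysis above a threshold), not the published model.
"""
N, ahl, drug = 0.01, 0.0, 0.0      # cell density, quorum signal, released drug (arbitrary units)
r, cap = 1.0, 1.0                  # growth rate, carrying capacity of the colonized niche
k_ahl, d_ahl = 2.0, 0.5            # AHL production per cell and AHL decay
ahl_threshold, lysis_frac = 1.5, 0.9
dt = 0.01

for step in range(4000):
    N += dt * r * N * (1 - N / cap)          # logistic growth in the niche
    ahl += dt * (k_ahl * N - d_ahl * ahl)    # signal accumulates with density
    if ahl > ahl_threshold:                  # quorum threshold reached: synchronized lysis
        drug += lysis_frac * N               # lysed cells release their anticancer payload
        N *= (1 - lysis_frac)                # a small surviving fraction reseeds the next cycle
        ahl = 0.0                            # crude reset of the signal after lysis
        print(f"t={step*dt:5.1f}: lysis pulse, cumulative drug released = {drug:.2f}")
```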
Gene delivery and gene expression modulation

Gut therapeutics can function by delivering gene circuits to bacteria that are already present in the gut, which can enable precise editing and modification of the gut microbiome. For example, gut bacteria have been engineered to deliver CRISPR-based tools into recipient pathogenic cells to reduce host drug resistance or deactivate virulence genes , (Fig. ). This strategy could be used to create novel antibiotics, as it can eliminate pathogenic bacteria or decrease their pathogenic effects. Alternatively, phages can be used to modulate bacterial gene expression in the gut. Non-lytic, temperate phages can deliver catalytically inactive (‘dead’) Cas9 (dCas9) and CRISPR RNAs in situ, which alters gene expression of infected bacteria. This strategy could enable the development of phage therapy to modulate pathogen gene expression by, for example, suppressing the expression of virulence factors .

Engineered bacteria can also control gene expression in mammalian cells. For example, commensal bacteria were engineered to modify mammalian cells that overexpress cyclooxygenase 2 (COX2), which is characteristic of inflammatory diseases such as Crohn’s disease and ulcerative colitis . These bacteria invade cells in the colon mucosa and transfer plasmids expressing short interfering RNAs that downregulate expression of COX2. This strategy has been demonstrated to attenuate the inflammatory responses in mouse models , but the general approach still faces challenges to clinical translation; primarily, there is currently little control over specific entry into mammalian cells, which could cause detrimental off-target effects if bacteria were to enter healthy cells. Additionally, it is difficult to control the rate of circuit delivery into recipient cells, which could lead to non-uniform levels of gene knockdown.
Using engineered bacteria in a safe and contained way is a top priority in therapeutic development and is required to obtain regulatory approval.
For example, in both the European Union and the USA, regulatory agencies require extensive demonstration of the bacteria’s safety, genome stability, colonization time and ability to be removed , , . To effectively engineer bacteria to meet these criteria, it is critical to attenuate pathogenicity, control bacterial survival and replication, and minimize the risk of mutation.

Engineering safe and containable strains

Various engineering strategies can be used to make bacteria safe and to ensure that they do not survive outside their intended environments. To ensure safety, virulence genes can be readily removed via standard gene editing approaches . Further, ‘suicide genes’ can be incorporated so that engineered bacteria can be selectively removed from the population. A quorum-sensing system that prompts self-destruction upon reaching a certain density threshold is one example of an effective suicide gene , but other strategies can offer more external control. For instance, the use of auxotrophic bacteria allows growth only when an exogenous nutrient (for example, an unnatural amino acid) is supplied , enabling easy removal of engineered bacteria through withholding of the amino acid.

Maintenance of genetic stability

Another safety concern for engineered bacteria is ensuring that they do not mutate over time. This can happen through mutations in the sensor-encoding or effector-encoding genetic circuit, which can reduce treatment effectiveness or cause unwanted adverse effects. Circuits with minimal burden on engineered cells have been shown to be genetically stable in the gut environment , and engineering approaches can further stabilize systems. For instance, synthetic communities composed of multiple bacterial strains engineered to sense and replace a mutating subpopulation have increased circuit stability . Bacterial cells are also subject to horizontal gene transfer, whereby genetic material from other cells or viruses enters and can alter cell function in an unpredictable fashion . To prevent horizontal gene transfer, bacteria can be genetically re-coded to impair expression of viral proteins or replication of foreign plasmids, minimizing the risk of major mutations .
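The auxotrophy strategy above can also be reasoned about with simple exponential kinetics: while the supplement is provided the strain can grow, and once it is withheld the population decays towards clearance. The rates, dosing window and carrying capacity in the sketch below are illustrative placeholders, not measured values for any strain.

```python
"""Back-of-the-envelope model of auxotrophy-based containment.

The engineered strain grows only while an external supplement (for example, an
unnatural amino acid) is provided; withholding it converts net growth into net
death. All numbers are illustrative assumptions.
"""
import math

growth_rate = 0.3        # per hour while the supplement is supplied
death_rate = 0.3         # per hour net decline once the supplement is withheld
carrying_capacity = 1e9  # crude cap on growth in the colonized niche
N = 1e6                  # starting dose of engineered cells

for t in range(0, 73, 12):                        # report every 12 h over 72 h
    supplement_on = t < 24                        # supplement provided for the first 24 h only
    print(f"t={t:2d} h  supplement={'on' if supplement_on else 'off'}  cells~{N:9.2e}")
    rate = growth_rate if supplement_on else -death_rate
    N = min(N * math.exp(rate * 12), carrying_capacity)
```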
Compared with bacterial engineering, mammalian synthetic biology faces the added complexity of eukaryotic cell biology and associated gene regulation , but recent technological advances have improved our ability to control eukaryotic cell output at the transcriptional, translational and post-translational level – . Although bacterial cells have been the primary chassis for whole-cell diagnostics, a handful of mammalian cell diagnostics have been developed , . These show potential for diagnosing conditions with biomarkers that mammalian cells can recognize more easily than bacterial cells, such as inflammatory molecules produced by the immune system.

Ex vivo mammalian diagnostics

One prominent example of an ex vivo mammalian diagnostic is a whole-cell sensor for personalized, precise profiling of allergies . Allergen profiling is normally done with intrusive skin pricks that expose patients to allergens and induce immune reactions in the skin. As an alternative, HEK293 cells were engineered to robustly respond to histamine, a compound secreted by immune cells that indicates an allergic reaction . When a blood sample taken from a patient is exposed to an allergen, immune effector cells secrete histamine as usual, and the engineered sensor cells can detect and score the amount of histamine produced. These cells could be the basis of a high-throughput assay for allergic responses, which could replace the traditional skin prick test.

In vivo mammalian diagnostics

In vivo mammalian cell diagnostics are less common, primarily because most mammalian cells engineered to respond to diseases in vivo also serve as therapeutics, which we discuss in detail in the next section. However, one example of a purely diagnostic mammalian cell system is a sensor for hypercalcaemia . High levels of calcium are a result of hormone-mediated dysregulation of bone resorption and are associated with asymptomatic cancers . HEK293 cells were engineered to serve as sentinel cells for cancer by continuously monitoring calcium levels. When calcium in the blood surpasses a target threshold, the engineered cells produce melanin, a pigment that is visible through the skin. When encapsulated in alginate capsules and injected under the skin of mice, these aptly named HEK Tattoo cells function well as calcium reporters. However, they have not been tested in humans, presumably because of the immunogenicity associated with cell implantation if the cells were to leak out of the alginate capsules.
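As an illustration of how the ex vivo allergy read-out described above could be turned into a clinical result, the snippet below converts per-allergen reporter signals from the sensor cells into a ranked profile. The allergen names, signal values and the two-fold cut-off are invented for illustration and are not taken from the published assay.

```python
"""Toy scoring of an allergen panel from sensor-cell histamine read-outs.

All values, allergen names and cut-offs are hypothetical; the snippet only shows
how a reporter signal per allergen could become a ranked allergy profile.
"""
# Reporter signal (arbitrary units) from sensor cells after the patient's blood
# cells were challenged with each allergen; a buffer-only control sets the baseline.
readings = {"control": 1.0, "birch pollen": 9.5, "peanut": 3.1, "house dust mite": 1.2}

baseline = readings["control"]
cutoff = 2.0   # assumed fold-change over control treated as a positive response

profile = {
    allergen: round(signal / baseline, 1)
    for allergen, signal in readings.items()
    if allergen != "control"
}
for allergen, fold in sorted(profile.items(), key=lambda kv: -kv[1]):
    call = "positive" if fold >= cutoff else "negative"
    print(f"{allergen:16s} fold-change {fold:4.1f}  -> {call}")
```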
The primary focus in the field of mammalian synthetic biology has been the creation of theranostic cells that can simultaneously recognize a diseased state and respond to it in vivo . These systems harness the innate ability of mammalian cells to respond to a wide variety of stimuli, and thus show promise for real-time regulation of complex diseased states. In this section, we explore recent progress in the field of mammalian theranostic cell therapies. We first discuss engineered T cells as prime examples of theranostic cell engineering, focusing mainly on CAR T cells. We describe new protein engineering approaches to build CAR cells with improved specificity and safety profiles. Finally, we briefly describe ways that other types of theranostic cell are being engineered for diverse applications.

Synthetic TCR T cell therapeutics

T cell receptor (TCR)-modified T cells were first developed as a strategy to harness the potent therapeutic effects of cytotoxic T cells for anticancer therapies. TCRs recognize antigen peptides displayed on major histocompatibility complex (MHC) proteins, and they can be engineered to target specific, researcher-defined antigen peptides . These antigen peptides can originate from membrane-bound or intracellular proteins, giving researchers a wide range of potential targets. However, TCRs can only recognize certain peptide–MHC complexes, and MHC genes are highly polymorphic in the general population, which limits these therapies to patients expressing a given MHC haplotype. Nevertheless, clinical trials of TCR-engineered T cells have been successful in treating myeloma and melanoma , , and many others are currently underway .

CAR T cells for cancer treatment

Similar to TCR cell therapies, CAR T cells were engineered to sense cancer biomarkers and elicit a downstream cytotoxic response (Fig. ). However, unlike TCRs, CARs (previously referred to as T bodies and immunoreceptors) are artificial receptors that combine the antigen-binding specificity of an antibody and the T cell-activating signalling domains of the TCR without MHC restriction , .
The extracellular domain of a CAR is a single-chain variable fragment (scFv), which confers antigen-binding specificity, and the intracellular domain contains elements that activate T cell signalling . Upon extracellular target recognition, the intracellular domain activates the T cell response, which produces co-stimulatory signals necessary for T cell function, proliferation and persistence , leading to killing of cells that have the targeted receptor. Multiple generations of CARs have been created to stimulate the optimal combination of intracellular signalling, T cell activation and T cell persistence (Fig. ).

Currently approved CAR T cell therapies are autologous cell therapies. T cells harvested from the patient are first expanded and engineered ex vivo with a viral vector encoding the CAR protein for long-term expression; the engineered T cells are then infused back into the patient, where they home to tumours expressing an antigen of interest . Patients with previously non-responsive B cell cancers have experienced complete remission upon treatment with CAR T cells, and they are currently being developed for the treatment of many other cancer types, including solid tumours .

Despite the clinical success of CAR T cells in treating haematological cancers, there are still major obstacles to safe and efficacious CAR T cell treatment of diverse cancer types. In both haematological and solid tumours, there is a dearth of tumour-specific antigens, and on-target off-tumour killing of healthy cells can occur . Tumour cells can also downregulate expression of the antigen targeted by CAR T cells, a process known as antigen escape, allowing the tumour to grow again unchecked by the immune response . Additionally, many patients experience major adverse effects during treatment, such as neurotoxicity and cytokine release syndrome, which results from constitutive CAR activation . Ongoing efforts to modify CARs aim to overcome the above challenges, and thus improve treatment safety and efficacy. Specifically, research has demonstrated ways that CARs can be designed to enhance tumour specificity and to control the spatio-temporal profile of inflammatory cytokines . Novel modifications to CARs should both improve treatment safety and maximize the on-target immune response.

Improving CAR T cell safety

Unlike traditional small-molecule drugs where the dose is controlled during administration, the activity and proliferation of CAR T cells is largely uncontrollable once the therapy is administered. Thus, much research has focused on engineering control systems that allow in vivo CAR T cell modulation to improve the safety of the therapy. Many of these systems use small molecules to modulate T cell function , , . One approach is the development of cells engineered to have inducible suicide function so that they self-destruct upon addition of a small-molecule regulator , . Two suicide genes that have been effectively used are an inducible caspase 9 (iCasp9) , which initiates downstream apoptotic pathways once activated by a dimerizing small molecule, and herpes simplex virus thymidine kinase (HSV-TK) , which inhibits DNA synthesis when activated with the small molecule ganciclovir (Fig. ).

A second approach to improving safety is expression of co-receptors that inhibit CAR T cell action against healthy cells . Antigen-specific inhibitory CARs contain an scFv directed to antigens expressed on healthy cells fused to the signalling domains of the T cell inhibitory receptors CTLA4 and PD1 (Fig. ). When bound to antigens indicative of healthy cells, CAR T cell action is inhibited.
However, this approach is limited, as it is challenging to find cell surface markers that are unique to healthy cells. A third approach to controlling function is to modulate levels of functional CAR proteins at the cell surface , . A prime example of CAR surface expression control is the development of CAR T cells that can be reversibly paused after administration of a small-molecule ligand . These T cells express second-generation CARs fused to a ligand-induced degradation (LID) domain (Fig. ). Binding of the ligand to the LID domain induces the release of a cryptic degron, which results in selective CAR degradation; however, the T cells themselves still remain, unlike inducible suicide gene systems. Thus, the CAR T cells can resume activity when the ligand is removed, enabling precise and reversible control of CAR T cell function in a ligand concentration-dependent manner.

Improving CAR T cell efficacy

The ideal CAR T cell will only be active upon recognition of cancer-specific antigens, but the lack of tumour-specific antigens and antigen-independent activation of CARs (termed tonic signalling) can lead to unwanted CAR T cell activation. Additionally, tumours can lose expression of antigens targeted by CARs , rendering the treatment useless. One approach for overcoming antigen specificity challenges is the use of cells that can recognize multiple antigens simultaneously, for example, using SynNotch-gated CARs . These comprise AND-gated circuits, which means they are only activated after binding two tumour antigens (Fig. ). Binding of one scFv to its targeted tumour antigen triggers the translocation of a synthetic transcription factor to the nucleus, which causes expression of a CAR directed to a second tumour antigen.

Affinity or avidity tuning of chimeric proteins is another form of AND-gating in both receptor and ligand design that can diminish the targeting of healthy antigen-presenting cells . CARs dimerize to effect their signalling in the cell; however, tonic signalling, whereby CARs dimerize without the presence of antigen, is an issue that currently approved CARs face . One example of avidity tuning is the AvidCAR T cell platform , which prevents unwanted dimerization and cell activation. This system employs monomeric CARs with low-affinity, single-domain antigen-binding domains (instead of an scFv) that rely on bivalent antigen engagement for dimerization and activation (Fig. ). Reduced affinity of the single-domain antigen-binding domain prevents constitutive CAR dimerization, and CAR signalling and effector function are only active when antigens are co-expressed on the same cell. This platform is thus an easily controllable and combinatorial system that better targets tumour cells co-expressing antigens rather than healthy surrounding tissue. Aside from AND gates, other logic gates, such as OR and NOT gates, have been engineered to recognize different combinations of surface antigens to increase tumour targeting over the targeting of healthy cells , , .
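The receptor logic described above reduces, at its simplest, to Boolean conditions on which antigens a target cell displays: a SynNotch-gated CAR behaves as an AND gate over two tumour antigens, and an inhibitory receptor recognizing a healthy-tissue marker adds a NOT condition. The truth-table sketch below makes that explicit; the antigen roles and function names are placeholders rather than targets from any particular study.

```python
"""Truth-table sketch of logic-gated CAR T cell activation.

A Boolean caricature of the circuits described above, with hypothetical antigens.
"""
from itertools import product

def synnotch_and_gate(antigen_a: bool, antigen_b: bool) -> bool:
    # Antigen A drives expression of a CAR against antigen B, so killing
    # requires both antigens in the same tumour environment (an AND gate).
    car_expressed = antigen_a
    return car_expressed and antigen_b

def with_inhibitory_receptor(activated: bool, healthy_marker: bool) -> bool:
    # An inhibitory CAR recognizing a healthy-tissue marker vetoes activation (a NOT condition).
    return activated and not healthy_marker

print("A  B  healthy -> kill")
for a, b, healthy in product([False, True], repeat=3):
    kill = with_inhibitory_receptor(synnotch_and_gate(a, b), healthy)
    print(f"{a!s:5} {b!s:5} {healthy!s:7} -> {kill}")
```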
Finally, an alternative approach to combat antigen escape is the development of universal CAR T cells, which can be altered to detect different antigens without having to entirely re-engineer the CAR – . These systems split the antigen-recognition domain and the co-stimulatory domains of conventional CARs into separate components. The first component is an engineered T cell expressing a universal CAR construct consisting of intracellular signalling domains and an extracellular adapter (instead of an scFv directed to the antigen of interest). The second component is a complementary adapter molecule that confers antigen-binding specificity and the modularity of the platform. One example of such an approach is the split, universal and programmable CAR (SUPRA CAR) platform , which concurrently addresses specificity, safety and ease of design (Fig. ). The SUPRA CAR system consists of a T cell that only expresses the T cell signalling domains of a CAR linked to an extracellular leucine zipper domain (zipCAR) and an adapter molecule, which is an scFv linked to a complementary leucine zipper (zipFv). The zipFv confers the antigen specificity of the CAR T cell, and different scFv leucine zippers can be easily injected into a patient without reinfusing cells. This modularity allows combinatorial logic and inhibition of CAR function depending on the zipCARs and zipFvs used.

CAR T cells for solid tumours

Although this review focuses on CAR T cells engineered to treat haematological cancers, CAR T cells are also being explored to treat solid tumours . These approaches have not been as successful in generating remission , and there are no currently approved CAR T cells that target solid tumours. Beyond the challenges discussed above, these therapies must overcome the immunosuppressive tumour microenvironment and impaired trafficking of CAR T cells into the tumour mass . Examples of CAR T cell modifications to circumvent these challenges are the expression of inflammatory cytokines, such as IL-12, that improve CAR persistence and activity and the co-expression of chemokine receptors, such as CCR4 (ref. ). Additionally, CARs can be used in combination with oncolytic viruses, which can cause direct tumour cell lysis or can generate an inflammatory immune response , , but these systems are beyond the scope of this Review.

Other immunomodulatory CAR cells

The majority of recent T cell engineering has focused on developing novel cancer therapeutics, yet CARs also have the potential to treat autoimmune disorders or to regulate ageing cells. By targeting cells that secrete inflammatory cytokines, CARs can dampen an overactive immune response. Recently, ‘senolytic’ CAR T cells were used to recognize and eliminate senescent cells . Eliminating senescent cells reduces inflammation and tissue damage and increases the healthspan – , suggesting that senolytic T cells could be a potential anti-ageing therapeutic. Additionally, introducing CARs into immune cells beyond T cells leverages the diversity of effector functions found across immune cell types (Box ). The resulting therapies could overcome some of the current limitations of CAR T cells and also broaden the scope of CAR cell applications to other immune disorders.

Box 1 Other CAR immune cells and their effector functions

Natural killer cells

Similar to cytotoxic T cells, natural killer cells, which are innate immune cells that do not require prior activation, also induce receptor-mediated cell death . Unlike chimeric antigen receptor (CAR) T cells, which need to be patient-derived to avoid immune rejection, natural killer cells derived from unrelated, HLA-mismatched donors were well tolerated in phase I and II clinical trials , .
Although the mechanism for this is still unclear, cytotoxicity of donor natural killer cells against a patient’s alloreactive T cells seems to mediate this tolerance , . By redirecting natural killer cell activity with antibodies or single-chain variable fragment (scFv) domains against CD19, the tumour-associated antigen mesothelin and HIV gp160, CAR-engineered natural killer cells have been shown to be effective for targeting lymphoma , solid ovarian tumours and HIV-infected cells , respectively (see the figure, part a ).

Macrophages

Macrophages are phagocytic cells that can engulf and neutralize pathogens. They can infiltrate the solid tumour microenvironment and bridge the gap between the innate and adaptive immune systems by presenting engulfed antigens to B cells and T cells of the adaptive immune system. Macrophages use cytokines to activate or inhibit cells of the adaptive immune system, including T cells, making them a potent target for CAR engineering. Recently, engineered CAR macrophages were shown to not only infiltrate tumours but also overcome the immunosuppressive tumour microenvironment and maintain a pro-inflammatory M1 phenotype (see the figure, part b ). The authors also showed that adenoviral transduction of the CAR into macrophages was sufficient for M1 polarization, and that this polarization induced pro-inflammatory gene expression in the tumour microenvironment, including converting bystander M2 macrophages into M1 macrophages. This proof-of-concept study demonstrates that CAR macrophages could become a promising new therapeutic for solid tumours, and highlights the potential of antigen-presenting cell engineering in altering the inflammatory state of the microenvironment.

Regulatory T cells

Regulatory T cells (T reg cells) are important mediators of immune tolerance, and can dampen the immune response through interactions with cells of the innate and adaptive immune systems , . T reg cells suppress the inflammatory effector functions of cognate immune cells via localized anti-inflammatory cytokine secretion , although they can also perform perforin-mediated killing like their CD8 + counterparts . T reg cells have been engineered with both T cell receptors (TCRs) and CARs to direct their immunosuppressive functions, although CAR-mediated activation has been reported as stronger than TCR-based activation of T reg cells in inducing proliferation . Whereas polyclonal T reg cell therapy is currently being tested in clinical trials for autoimmune diseases, these studies suggest that the next generation of targeted CAR T reg cells could have even greater potency in future clinical trials.

Theranostic cells for other applications

Whereas recent mammalian theranostics have centred on therapeutic T cells, mammalian theranostic cells have also been developed for an array of other diseases , , , . Although these have not yet been developed for clinical use, they have shown great potential in preclinical studies, particularly for the treatment of autoimmune diseases.

Cells to modulate the immune system

To engineer cells with novel functions, synthetic biologists can piece together genetic elements from diverse cell types. A prime example of such engineering is the development of HEK293 cells for the treatment of psoriasis. These cells express anti-inflammatory cytokines upon recognition of psoriasis-specific inflammatory cytokines (Fig. ).
The cytokines TNF and IL-22 are characteristic of psoriasis, but HEK293 cells only endogenously express one-half of the IL-22 receptor (IL-10RB). To endow HEK293 cells with the ability to recognize IL-22, the endogenous TNF-responsive pathway was engineered to control production of the other half of the IL-22 receptor (IL-22RA) . Then, binding of IL-22 to the expressed receptor triggers the IL-22 signalling cascade. This pathway was rewired to control production of the two anti-inflammatory cytokines IL-4 and IL-10 (Fig. ). The resulting cells successfully reduce inflammation upon sensing the psoriatic phenotype, and in mouse models they prevent onset of psoriatic flares and attenuate acute psoriasis . Thus, the researchers rewired an endogenous pathway (TNF signalling) to express a component (IL-22 receptor) from a different cell type to generate novel responses in the theranostic cell. Using similar engineering tactics, other theranostic cells have been developed to treat diverse diseased states such as diabetes , , methicillin-resistant Staphylococcus aureus allergy and inflammation , . Engineered stem cells for regenerative medicine Genetically engineered mesenchymal stem cells are powerful tools for tissue regeneration and gene therapy, and using synthetic biology approaches to control their activity could expand their regenerative applications. For example, engineering cells to overexpress IL-1 receptor antagonist (IL-1RA) can dampen an overactive, inflammatory immune response, and simultaneously expressing vascular endothelial growth factor (VEGF) promotes angiogenesis, both critical components of tissue regeneration . Whereas delivery of VEGF-encoding DNA produces angiogenic effects – , stimulation of the native VEGF promoter triggers more robust angiogenesis . This is thought to be due to post-transcriptional processing leading to multiple splice variants of VEGF, which provide a more comprehensive set of angiogenic stimuli. In line with these results, transgenic expression of survivin, an enhancer of VEGF production, accelerates myocardial healing post infarction . Because mammalian cells have complex post-transcriptional processing that is difficult to control (that is, alternative RNA splicing) , stimulating native cytokine expression may not be sufficient to achieve the desired phenotype; alternatively, exogenous expression of cytokines can be more rationally engineered and more tightly controlled (Fig. ). Ectopic transcription factor expression (expression of transcription factors not normally present in a cell type) can also modify cell behaviour, most potently demonstrated by the use of Yamanaka factors for generating induced pluripotent stem cells . Since then, expression of other transcription factors has expanded the scope of mesenchymal stem cell therapies. For example, overexpression of HIF1α enhances haematopoietic growth factor production, which could increase the success of bone marrow mesenchymal stem cell transplants . As engineered receptors, transcription factors and cytokines are connected and regulated in more complex pathways, mammalian synthetic biologists will be able to better control cell function and create new types of therapeutic cell with more potent regenerative capabilities. T cell receptor (TCR)-modified T cells were first developed as a strategy to harness the potent therapeutic effects of cytotoxic T cells for anticancer therapies.
TCRs recognize antigens displayed on major histocompatibility complex (MHC) proteins, and they can be engineered to target specific, researcher-defined antigen peptides . These antigen peptides can originate from membrane-bound or intracellular proteins, giving researchers a wide range of potential targets. However, TCRs can only recognize certain peptide–MHC complexes, and MHC genes are highly polymorphic in the general population, which limits these therapies to patients expressing a given MHC haplotype. Nevertheless, clinical trials of TCR-engineered T cells have been successful in treating myeloma and melanoma , , and many others are currently underway . Similar to TCR cell therapies, CAR T cells were engineered to sense cancer biomarkers and elicit a downstream cytotoxic response (Fig. ). However, unlike TCRs, CARs (previously referred to as T bodies and immunoreceptors) are artificial receptors that combine the antigen-binding specificity of an antibody and the T cell-activating signalling domains of the TCR without MHC restriction , . The extracellular domain of a CAR is a single-chain variable fragment (scFv), which confers antigen-binding specificity, and the intracellular domain contains elements that activate T cell signalling . Upon extracellular target recognition, the intracellular domain activates the T cell response, which produces co-stimulatory signals necessary for T cell function, proliferation and persistence , leading to killing of cells that have the targeted receptor. Multiple generations of CARs have been created to stimulate the optimal combination of intracellular signalling, T cell activation and T cell persistence (Fig. ). Currently approved CAR T cell therapies are autologous cell therapies. T cells harvested from the patient are first expanded and engineered ex vivo with a viral vector encoding the CAR protein for long-term expression; the engineered T cells are then infused back into the patient, where they home to tumours expressing an antigen of interest . Patients with previously non-responsive B cell cancers have experienced complete remission upon treatment with CAR T cells, and they are currently being developed for the treatment of many other cancer types, including solid tumours . Despite the clinical success of CAR T cells in treating haematological cancers, there are still major obstacles to safe and efficacious CAR T cell treatment of diverse cancer types. In both haematological and solid tumours, there is a dearth of tumour-specific antigens, and on-target off-tumour killing of healthy cells can occur . Tumour cells can also downregulate expression of the antigen targeted by CAR T cells, a process known as antigen escape, allowing the tumour to grow again unchecked by the immune response . Additionally, many patients experience major adverse effects during treatment, such as neurotoxicity and cytokine release syndrome, which results from constitutive CAR activation . Ongoing efforts to modify CARs aim to overcome the above challenges, and thus improve treatment safety and efficacy. Specifically, research has demonstrated ways that CARs can be designed to enhance tumour specificity and to control the spatio-temporal profile of the inflammatory response . Novel modifications to CARs should both improve treatment safety and maximize the on-target immune response. Improving CAR T cell safety Unlike traditional small-molecule drugs where the dose is controlled during administration, the activity and proliferation of CAR T cells is largely uncontrollable once the therapy is administered.
Thus, much research has focused on engineering control systems that allow in vivo CAR T cell modulation to improve the safety of the therapy. Many of these systems use small molecules to modulate T cell function , , . One approach is the development of cells engineered to have inducible suicide function so that they self-destruct upon addition of a small-molecule regulator , . Two suicide genes that have been effectively used are an inducible caspase 9 (iCasp9) , which initiates downstream apoptotic pathways once activated by a dimerizing small molecule, and herpes simplex virus thymidine kinase (HSV-TK) , which inhibits DNA synthesis when activated with the small molecule ganciclovir (Fig. ). A second approach to improving safety is expression of co-receptors that inhibit CAR T cell action against healthy cells . Antigen-specific inhibitory CARs contain an scFv directed to antigens expressed on healthy cells fused to the signalling domains of T cell inhibitory receptors, CTLA4 and PD1 (Fig. ). When bound to antigens indicative of healthy cells, CAR T cell action is inhibited. However, this approach is limited, as it is challenging to find cell surface markers that are unique to healthy cells. A third approach to controlling function is to modulate levels of functional CAR proteins at the cell surface , . A prime example of CAR surface expression control is the development of CAR T cells that can be reversibly paused after administration of a small-molecule ligand . These T cells express second-generation CARs fused to a ligand-induced degradation (LID) domain (Fig. ). Binding of the ligand to the LID domain induces the release of a cryptic degron, which results in selective CAR degradation; however, the T cells themselves still remain, unlike inducible suicide gene systems. Thus, the CAR T cells can resume activity when the ligand is removed, enabling precise and reversible control of CAR T cell function in a ligand concentration-dependent manner. Improving CAR T cell efficacy The ideal CAR T cell will only be active upon recognition of cancer-specific antigens, but the lack of tumour-specific antigens and antigen-independent activation of CARs (termed tonic signalling) can lead to unwanted CAR T cell activation. Additionally, tumours can lose expression of antigens targeted by CARs , rendering the treatment useless. One approach for overcoming antigen specificity challenges is the use of cells that can recognize multiple antigens simultaneously, for example, using SynNotch-gated CARs . These comprise AND-gated circuits, which means they are only activated after binding two tumour antigens (Fig. ). Binding of one scFv to its targeted tumour antigen triggers the translocation of a synthetic transcription factor to the nucleus, which causes expression of a CAR directed to a second tumour antigen. Affinity or avidity tuning of chimeric proteins is another form of AND-gating in both receptor and ligand design that can diminish the targeting of healthy antigen-presenting cells . CARs dimerize to effect their signalling in the cell; however, tonic signalling, whereby CARs dimerize without the presence of antigen, is an issue that currently approved CARs face . One example of avidity tuning is the AvidCAR T cell platform , which prevents unwanted dimerization and cell activation. This system employs monomeric CARs with low-affinity, single-domain antigen-binding domains (instead of an scFv) that rely on bivalent antigen engagement for dimerization and activation (Fig. ).
Reduced affinity of the single-domain antigen-binding domain prevents constitutive CAR dimerization, and CAR signalling and effector function are only active when antigens are co-expressed on the same cell. This platform is thus an easily controllable and combinatorial system that better targets tumour cells co-expressing antigens rather than healthy surrounding tissue. Aside from AND gates, other logic gates, such as OR and NOT gates, have been engineered to recognize different combinations of surface antigens to increase tumour targeting over the targeting of healthy cells , , .
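Abstractly, these gating strategies are Boolean functions over combinations of surface antigens. The minimal Python sketch below illustrates only that decision logic (AND for SynNotch-gated CARs, NOT for inhibitory CARs, OR for multi-antigen targeting); the antigen names are hypothetical examples, and the functions model none of the underlying receptor biochemistry.

```python
# Boolean abstraction of logic-gated CAR T cell activation.
# Antigen names and gate assignments are hypothetical, for illustration only.
from typing import Set

def and_gated(detected: Set[str], priming: str, target: str) -> bool:
    """SynNotch-style AND gate: binding the priming antigen induces a CAR
    against the second antigen, so both must be present for killing."""
    return priming in detected and target in detected

def not_gated(detected: Set[str], target: str, healthy_marker: str) -> bool:
    """Inhibitory-CAR-style NOT gate: activation is vetoed when a marker of
    healthy tissue is also present."""
    return target in detected and healthy_marker not in detected

def or_gated(detected: Set[str], targets: Set[str]) -> bool:
    """OR gate: any one of several tumour antigens suffices, which helps
    counter antigen escape."""
    return bool(detected & targets)

if __name__ == "__main__":
    tumour_cell = {"antigenA", "antigenB"}
    healthy_cell = {"antigenA", "healthyMarker"}

    print(and_gated(tumour_cell, "antigenA", "antigenB"))        # True
    print(and_gated(healthy_cell, "antigenA", "antigenB"))       # False
    print(not_gated(healthy_cell, "antigenA", "healthyMarker"))  # False (vetoed)
    print(or_gated(tumour_cell, {"antigenB", "antigenC"}))       # True
```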
Although engineered cells show great promise for changing treatment paradigms, many challenges must be addressed to ensure their widespread clinical approval and success. A primary challenge for engineered bacterial cells is the current dearth of sensors available for many physiologically relevant compounds. Most bacterial diagnostic and theranostic approaches rely on finding existing proteins that interact with and respond to the molecule of interest. This bioprospecting approach works well when sensors are available for a given target, but it is not widely generalizable. A platform that uses modular sensors such as aptamers or antibody fragments, which can be evolved to specifically bind a target molecule with a user-defined affinity , , could greatly expand the scope of bacterial diagnostics and theranostics. Another challenge for bacterial theranostics is the precise control of cellular output. Bacteria can be engineered to produce and secrete small-molecule and protein drugs, but it is difficult to tightly control the amounts of drugs that are produced ; such control is critical, especially for drugs that have small therapeutic windows. Cells that integrate production over time or can sense external levels of the produced drug could be used to effectively titrate drug levels. As in silico design approaches make it easier to build robust and complex circuits , it is becoming more feasible to build such complex systems. In mammalian cell engineering, the high cost and logistical challenges associated with CAR T cell treatment, in addition to potentially lethal adverse effects, have limited their use to a ‘last-resort’ option. Manufacturing a single dose of tisagenlecleucel costs upwards of US$40,000, and other costs associated with treatment have driven the cost to $475,000 per person, making it the most expensive cancer therapy to date . The need to individually engineer each patient’s cells ex vivo is largely responsible for the high cost. A promising alternative, currently in clinical trials, is the use of allogeneic T cells, which could serve as ‘off-the-shelf’ cell therapies, eliminating the need to engineer custom therapies for each patient , .
However, the use of allogeneic cells increases the risk of graft-versus-host disease , necessitating further genetic engineering to limit rejection. Another potential way to reduce therapy costs and manufacturing challenges would be to directly inject genetic circuits into patients, rather than into their isolated cells, which is feasible when the tissue can be accessed directly; for example, in the case of retinal tissue, voretigene neparvovec (Luxturna) is injected directly into the eye and was recently approved as a gene therapy against vision loss . Advances in targeting adeno-associated virus vectors to specific tissues and cell types could help provide similarly specific gene delivery to tissues that are inaccessible by injections . As theranostic cells become more potent, it will be critical to monitor their impact on the surrounding tissue. For example, incorporation of CARs into diverse immune cell types will lead to changes in the local microenvironment around the target cell, which are good for the immediate treatment goal but could have potentially detrimental long-term consequences. As these therapies become more developed, it will be critical to assess the reversibility of these changes and then develop engineering approaches to enable better control. Finally, both bacterial and mammalian cell therapies must overcome negative public opinion, which is fuelled both by the stigma of using genetically modified organisms and by previous failures and breaches of scientific ethics during the use of genetic therapies. Positive branding and public campaigns highlighting the safety and health benefits of diagnostic and therapeutic bacteria could help to alter public opinion on the use of engineered cells. Expanding mammalian cell-based treatments to non-terminal diseases will require extensive demonstration of their safety and benefits over more traditional treatments. Taken together, recently developed cellular diagnostics and therapeutics have shown that synthetic biology has real potential to transform health-care paradigms. Cell-based therapies have rapidly progressed through clinical trials and regulatory approval, and are thus emerging as an alternative treatment modality to existing small-molecule drugs and protein biologics. The expanding cellular engineering toolbox will lead to more sensitive diagnostics and novel therapeutics for previously intractable diseases.
Multicenter Testing of a Simple Molecular Diagnostic System for the Diagnosis of Mycobacterium Tuberculosis

Tuberculosis (TB) is a communicable disease caused by Mycobacterium tuberculosis (MTB), which can spread through the air, for example, by coughing. TB is a major cause of health deterioration and death worldwide . TB affects multiple organs; it can be classified as a multisystem infectious disease. The infection rate is so high that about a quarter of the world’s total population is estimated to be infected with tuberculosis . In 2015, the United Nations established 17 Sustainable Development Goals, which include ending tuberculosis by 2030. However, tuberculosis research is relatively obscured by the allocation of resources such as manpower, laboratories, and clinical services for diseases such as HIV, malaria, and now, COVID-19 . TB, like other communicable diseases, requires rapid diagnosis and treatment in the early stages. Tuberculosis diagnostic tests have improved significantly in recent years and may involve chest X-ray, tuberculin test, microscopic observations of sputum smears, bronchoscopy, CT (computed tomography) scan, sputum culture, and tissue biopsy analysis. Among these, acid-fast bacilli (AFB) smear microscopy and bacterial culture are used as the gold standards . However, the sensitivity of pulmonary tuberculosis identification through microscopic smear tests is estimated to be up to 70% . The mycobacterial culture method requires 2–8 weeks for cultivating the bacteria, and there is a limit to the minimum number of bacteria that can be detected . The analysis of sputum after acid-fast (AF) staining is faster and easier for the diagnosis of pulmonary TB compared with sputum culture. AF staining is cost-effective, relatively simple, and fast. Nevertheless, sample processing, the thickness of the smears, the preparation and conservation of the reagents, the quality of the microscopes, the duration of the primary and counterstaining incubations, as well as the expertise of the technical staff affect the sensitivity and specificity of AF staining . Additional studies have been conducted to achieve better efficiency using the existing methods. For instance, digital chest X-rays with the computer-aided detection of tuberculosis have been increasingly used in various settings. However, this technique still needs improvements in computer-aided detection . Currently, several molecular diagnostic technologies are available for the detection of TB with high sensitivity and specificity. Nucleic acid amplification tests (NAATs) particularly emerged as an alternative to traditional methods as they are faster and easier to apply . Several fast molecular tests are recommended by the WHO as initial diagnostic tests for TB, such as Xpert MTB/RIF Ultra (Ultra) (Cepheid, Sunnyvale, CA, USA) and Truenat MTB/RIF (Molbio Diagnostics, Verna, India), some of which can simultaneously detect drug resistance . However, there are still many regions in the world that rely heavily on outdated tuberculosis diagnostic tests . These methods are still used despite their poor sensitivity as the cost and infrastructure requirements of molecular testing methods do not allow scaling up, and the available resources of molecular testing are often underutilized . To address the limitations of conventional diagnostic technologies, point-of-care testing (POCT)-based technologies have been researched.
POCT performs well as a simple, rapid, low-cost method even in resource-limited environments. POCT has been extensively studied in relation to biosensors , microfluidic systems , and lipoarabinomannan (LAM) tests . In this study, we report a new molecular diagnostic system for TB detection. This system was developed by integrating a simple sample preparation technology with a DNA detection technology. For sample preparation, we used a simple DNA extraction method comprising a syringe and a syringe filter with homobifunctional imidoesters (HIs), such as dimethyl suberimidate (DMS), which cross-link amine-functionalized diatomaceous earth and the amine groups of nucleic acids . This sample preparation system enables DNA extraction in about 1–2 h depending on the sample volume (up to 10 mL) and is inexpensive and robust. This technology has been proven effective as it involves simple procedures and does not require any instruments. Then, the extracted DNA was analyzed by quantitative PCR using specific primers. Additionally, we validated the clinical utility of the system in 88 sputum samples collected from patients. Therefore, this system provides a diagnosis method that is more rapid and simple than the traditional TB diagnostic assays, such as Xpert MTB/RIF, MTB PCR, and mycobacterial culture. 2.1. Chemicals and Reagents Hyflo Super Cel (Diatomaceous earth), 3-aminopropyl(diethoxy)methylsilane (APDMS, 97%), dimethyl suberimidate dihydrochloride (DMS, 98%), lysozyme solution (50 mg/mL in distilled water), sodium hydroxide solution (50% in H 2 O), N-Acetyl-L-cysteine (NALC, 99%), sodium citrate, and Triton X-100 were purchased from Sigma-Aldrich (St. Louis, MO, USA). Tris-HCl (pH 8.0), distilled water (DNase/RNase-Free), and EDTA (pH 8.0) were purchased from Invitrogen (Waltham, MA, USA). Proteinase K solution (>600 mAU/mL) was purchased from Qiagen (Hilden, Germany). Absolute ethanol was purchased from Merck (Whitehouse Station, NJ, USA). Phosphate-buffered saline (PBS; 10×, pH 7.4) was purchased from Gibco (Grand Island, NY, USA). 2.2. Synthesis of Amine-Functionalized Diatomaceous Earth (D-APDMS) D-APDMS used in the NA extraction processes was prepared as follows . Diatomaceous earth (DE) was washed with distilled water (DW) for 30 min with stirring. The sediment containing impurities was removed after a short period of settling under gravity. APDMS was used to prepare D-APDMS. Briefly, 5 mL of APDMS was pipetted dropwise into 100 mL 95% ( v / v ) ethanol solution, which was acidified with acetic acid (pH 5) and combined with 2 g DE. The mixture was incubated for 4 h at room temperature (RT) with stirring. Then, D-APDMS was washed with ethanol, dried under vacuum overnight, and stored at RT until use. 2.3. Filter-Based Nucleic Acid Extraction D-APDMS and DMS were used as the matrix for bacterial DNA extraction. First, 30 μL of lysozyme was mixed with 1.5 mL of sample, and the mixture was incubated for 1 h at 37 °C. After incubation, 1 mL of D-APDMS suspension (60 mg/mL in 10 mM Tris-HCl buffer at pH 7.0), 1 mL of DMS solution (100 mg/mL in 70% ethanol), 1 mL of GITC lysis solution (4 M GITC, 55 mM Tris-HCl, 25 mM EDTA, 3% Triton X-100 in distilled water), and 50 μL of Proteinase K were pipetted into the sample solution. Then, the mixture was incubated for 30 min at 56 °C and 15 min at 95 °C for NA extraction. During the incubation, a hydrophobic PTFE syringe filter (25 mm, 3.0 μm, Hawach Scientific, Xi’an, China) was washed with 1 mL PBS.
The incubated mixture was transferred into a syringe filter and then washed with 2 mL of PBS using the syringe. Finally, 150 μL of elution buffer (10 mM Tris-HCl, pH 10.0) was added into the syringe filter and, after 1 min of incubation at RT, the elution buffer containing NAs was collected, and the extracted DNA was stored at −20 °C until use. 2.4. Nucleic Acid Detection Method Isolated DNA was analyzed by quantitative PCR to examine the efficiency of the sample preparation process using D-APDMS. Quantitative PCR conditions were as follows: an initial denaturation step at 95 °C for 15 min; 45 cycles of incubation at 95 °C for 10 s, at 63 °C for 20 s, and at 72 °C for 20 s; and melting steps at 95 °C for 30 s, at 65 °C for 30 s, and at 95 °C for 30 s. Amplification was performed in a total volume of 20 µL reaction mixture containing 5 µL of DNA, 10 µL of AccuPower 2× GreenStar qPCR Master Mix (Bioneer, Daejeon, Republic of Korea), and 2.5 µM of each primer. We performed conventional PCR, quantitative PCR, and recombinase polymerase amplification (RPA) to determine the quality of DNA extracted from TB. The conventional PCR cycling conditions were as follows: an initial denaturation step at 95 °C for 15 min; 40 cycles of incubation at 95 °C for 30 s, 58 °C for 30 s, and 72 °C for 30 s; and a final extension step at 72 °C for 5 min. The mixture included 5 µL of DNA in a total volume of 25 μL containing PCR buffer (10×, Qiagen), 2.5 mM MgCl 2 , 0.25 mM deoxynucleotide triphosphate, 25 pM of each primer, one unit of Taq DNA polymerase (Qiagen), and deionized (DI) water. The quantitative PCR conditions were as follows: an initial denaturation step at 95 °C for 30 s; 40 cycles of incubation at 95 °C for 5 s, 60 °C for 30 s; and cooling at 40 °C for 30 s. Amplification reactions contained 5 μL of RNA and were performed with LightCycler ® Multiplex RNA Virus Master (Roche, Mannheim, Germany). PCR products were analyzed by electrophoresis on a 2% agarose gel. The gel was visualized using a ChemiDoc XRS+ System (Bio-Rad, Hercules, CA, USA). The RPA reaction was performed using 3 μL of RNA and a TwistAmp ® RT Basic kit (TwistDx, Cambridge, UK) for 25 min at 40 °C. RPA products were analyzed on a 2% agarose gel and by lateral flow assay (LFA) using a Milenia HybriDetect 1 kit (TwistDx). Clinical samples of TB were confirmed using the PrimeraTM TB/MDR-TB Detection Kit (Cat Nr. PRT021 Infusion Tech, Gyeonggi-do, the Republic of Korea) according to the manufacturer’s instructions. Conventional PCR and RPA were performed on a T100 Thermal Cycler (Bio-Rad, Hercules, CA, USA). All quantitative PCR assays were performed on a CFX96 Touch Real-Time PCR Detection System (Bio-Rad). 2.5. Bacteria Samples and Clinical Samples To investigate the capacity of D-APDMS with syringe filter assays for bacterial cells, we used the extracted DNA from Brucella ovis (ATCC 25840) cells, which were grown in Brucella agar containing 5% defibrinated sheep blood and incubated at 37 °C in (5% CO 2 ) for 48 h. All patients with suspected pulmonary TB (PTB) who consented to the use of their sputum for additional tests, such as the TB diagnostic platform , were prospectively enrolled at a 2700-bed tertiary-care facility in Republic of Korea (Severance Hospital and Asan Medical Center, Dongguk University Ilsan Hospital, Yongin Severance Hospital, IRB 4-2020-1177. 2018-0020, 2020-1745, 2021-03-032-003, 9-2020-0166), and the protocol of this study was registered at clinicaltrials.gov (NCT03423550). 
The suspicion of PTB was based on the participants’ symptoms, history, and radiographic findings suggestive of TB. The enrollment was decided by five respiratory and infection specialists (E.H.L., Y.S.Y., S.H.K., Y.A.K., and S.W.L.), who had more than 15 years of experience in TB treatment. One volume of liquefaction buffer (5% NaOH, 4% NALC, and 1.5% sodium citrate in distilled water) was added to the collected clinic sputum sample. Then, it was stored at −20 °C until use.
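As a worked example of the sample-liquefaction arithmetic above, the short sketch below computes the reagent masses needed to prepare a chosen volume of the liquefaction buffer and the volume to add for the one-to-one mix with sputum. It assumes the stated percentages are weight/volume, which the protocol does not specify.

```python
# Reagent arithmetic for the liquefaction buffer described in the Methods
# (5% NaOH, 4% NALC, 1.5% sodium citrate in distilled water).
# Assumption: percentages are interpreted as w/v (g per 100 mL).

COMPOSITION_W_V = {"NaOH": 5.0, "NALC": 4.0, "sodium citrate": 1.5}  # percent w/v

def buffer_recipe(volume_ml: float) -> dict:
    """Grams of each reagent needed to prepare `volume_ml` of buffer."""
    return {reagent: pct / 100.0 * volume_ml for reagent, pct in COMPOSITION_W_V.items()}

def buffer_volume_for_sputum(sputum_ml: float) -> float:
    """'One volume' of buffer is added per volume of sputum (1:1 mix)."""
    return sputum_ml

if __name__ == "__main__":
    prep_ml = 100.0
    for reagent, grams in buffer_recipe(prep_ml).items():
        print(f"{reagent}: {grams:.1f} g per {prep_ml:.0f} mL of buffer")
    print(f"Buffer to add for a 3 mL sputum sample: {buffer_volume_for_sputum(3.0):.1f} mL")
```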
3.1. Principles of TB Molecular Diagnostic System The process of the tuberculosis molecular diagnostic system proceeds in five steps comprising sample preparation and detection stages . First, sputum samples are collected from the patients ( (1)) and mixed with the liquefaction buffer to obtain a liquid sample. Then, lysozyme is added to disrupt the cell membrane, and the samples are incubated at 37 °C for 60 min. Samples in large volumes (up to 10 mL) can be used for TB diagnosis without requiring additional instruments.
Second, in the bacteria lysis step ( (2)), the lysis buffer is added together with D-APDMS and DMS, and the samples are incubated for 30 min at 56 °C and 15 min at 95 °C for bacteria lysis. During the lysis step, DMS is added to bridge the DNA and D-APDMS. DMS has imidoester groups at both ends, which bind to the amine groups on the surface of D-APDMS on one side and the amine groups on DNA on the other side by covalent bonding. Since DNA has a negative charge, it also participates in electrostatic interactions. These interactions are stably maintained at pH 8. Third, a PTFE syringe filter is used to separate the DNA-bound D-APDMS ( (3)). The size of DE ranges from 200 nm to 3 μm, which is larger than the pore size of the filter membrane. Therefore, the D-APDMS is collected in the membrane, and other unnecessary substances are removed by washing with PBS. Fourth, the elution buffer is used to break the interactions. The extracted DNA is simply collected ( (4)) and confirmed by quantitative PCR ( (5)). This TB molecular diagnostic system enables nucleic acid extraction and detection within 3 h without requiring chaotropic agents or sophisticated instruments. 3.2. Optimization and Application of D-APDMS with Syringe Filter Prior to the use of the TB molecular diagnostic system, an optimization experiment for the use of D-APDMS with a syringe filter was performed. The performance of D-APDMS in sample preparation was confirmed through experiments using amine-functionalized diatomaceous earth . All optimization experiments were performed with the same concentration of Brucella ovis 10 5 CFU/mL, and the efficiency of the system was compared with that of a commercial column kit. Several conditions were tested in the lysis steps. SDS-based lysis buffer and GITC-based lysis buffer were compared ( A), and the GITC-based lysis buffer was found to be more effective as indicated by lower Ct values. Then, the incubation duration of the lysis step was optimized ( B), and it was observed that a 30 min incubation (56 °C) produced better results than a 45 min incubation. To further increase the extraction efficiency, we tried combining the thermal lysis method with the existing chemical lysis method. We compared the efficiencies of the following lysis procedures by examining the Ct values: (i) 30 min incubation at 56 °C followed by an additional 15 min at 95 °C and (ii) a single step of 30 min incubation at 56 °C. The former, i.e., the extra 15 min incubation at 95 °C after the 30 min incubation at 56 °C, resulted in superior efficiency. Next, the lysozyme treatment, which breaks the bacterial cell membrane, was also optimized ( C). Since the optimum temperature for lysozyme activity was 37 °C, the lysozyme step (1 h at 37 °C) was added before the lysis step. Of the two options tested (adding lysozyme before or during lysis), adding lysozyme before the lysis step resulted in better efficiency ( C). Taken together, these results confirmed that the system enriches and extracts DNA most efficiently with a GITC-based lysis buffer and incubation at 56 °C for 30 min followed by 95 °C for 15 min. In addition, a lysozyme step was added before the lysis step to increase the efficiency of cell lysis. Under these conditions, bacterial enrichment and DNA extraction were carried out using the syringe filter and D-APDMS. Next, we found that the syringe filter was blocked by various contaminants during the addition of sputum samples. We performed DNA extraction using filter membranes with a pore size of either 1.0 μm or 3.0 μm ( D).
The extracted DNA was analyzed by quantitative PCR. The ERV3 gene was used as an internal control; the IS6110 and rpoB genes were selected as TB targets. No amplification was observed in quantitative PCR for samples prepared using the 1.0 μm filter, presumably because the filter was blocked. On the other hand, efficient amplification with the expected Ct values was obtained with the 3.0 μm filter. Consequently, we decided to conduct the experiments on the clinical samples using 3.0 μm filters. Next, we tested the extraction capacity of the system using various concentrations of bacteria ranging from 10^3 CFU to 10^7 CFU per 1.5 mL ( A). DNA was successfully extracted from samples containing both low and high concentrations of bacteria ( A), and no difference in extraction efficiency was found between the proposed system and a commercial spin column kit. In addition, we performed tests with the same concentration of bacteria (Brucella ovis, 10^5 CFU/mL) at different volumes ranging from 0.5 to 10 mL. DNA extraction was possible from both small and large sample volumes (up to 10 mL; B), suggesting that our method might overcome the limitations of commercial kits, which are applicable only to limited sample volumes. Consistent with previous studies, the limit of detection can be enhanced when a large sample volume is used during sample preparation . The proposed system can therefore enrich and extract DNA from large-volume samples, and owing to this advantage it may show better clinical sensitivity for the detection of MTB. 3.3. Testing of Various DNA Amplification Methods for TB Molecular Diagnostics To determine the optimal method for TB molecular diagnostics, we tested the detection limits of various amplification methods including conventional PCR, quantitative PCR, RPA, and paper-based lateral flow assay (LFA). We used ten-fold serially diluted DNA samples ranging from 1 to 10^8 copies. The products of conventional PCR, quantitative PCR, and RPA were analyzed on a 2% agarose gel. We found that the detection limits of conventional PCR and quantitative PCR were the same (10^2 copies; A). The IS6110 gene was used as a marker of TB, and the ERV3 gene was used as an internal control. Based on this, we drew a standard curve . When the detection limit was calculated at a Ct value of 38, which is the standard set by the company, the resultant standard curve indicated a detection limit of about nine copies for the two genes. In contrast, RPA has the advantages of isothermal operation and speed, but it showed a higher detection limit (10^3 copies). These results are likely to be sufficient for in-field diagnosis using RPA, although the efficiency was slightly reduced ( B). Additionally, to explore the possibility of in-field diagnosis of TB, LFA was performed using the RPA product ( C), which exhibited a detection limit similar to that of RPA. Among the detection methods tested with the D-APDMS platform, the RPA and LFA methods still require improvements in efficiency. Based on the test results of the detection methods, we decided to proceed with conventional PCR and quantitative PCR, which showed the best efficiency. Although quantitative PCR has the same detection limit as conventional PCR, it does not require additional steps, such as 2% agarose gel electrophoresis, and takes about 1 h, which is shorter than conventional PCR (>2 h). Therefore, quantitative PCR was selected as the detection method for the TB molecular diagnostic system.
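As a point of reference for how a copy-number detection limit can be read off a standard curve such as the one described above, the minimal sketch below fits a linear Ct-versus-log10(copies) relationship and converts the Ct 38 threshold back into an estimated copy number. The dilution points and fitted values are illustrative assumptions only and are not the measured data from this study.

```python
import numpy as np

# Hypothetical standard-curve data: ten-fold dilutions (input copies) and measured Ct values.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8])
ct = np.array([34.8, 31.4, 28.1, 24.7, 21.3, 18.0, 14.6])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)

# Amplification efficiency implied by the slope (100% corresponds to a slope of about -3.32).
efficiency = 10 ** (-1.0 / slope) - 1.0

# Convert the Ct 38 positivity threshold used in the study into an estimated detectable copy number.
ct_cutoff = 38.0
detectable_copies = 10 ** ((ct_cutoff - intercept) / slope)

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"estimated efficiency = {efficiency:.1%}")
print(f"estimated copies detectable at Ct {ct_cutoff:.0f} ≈ {detectable_copies:.1f}")
```

With the illustrative numbers above, the estimate lands in the single-digit-to-low-tens copy range, which is consistent in spirit with the "about nine copies" figure reported for the two target genes.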
3.4. Utility of the TB Molecular Diagnostic System on Clinical Samples To validate the clinical utility of the system, we examined 88 sputum samples obtained from patients at four hospitals in the Republic of Korea (Severance Hospital, Asan Medical Center, Dongguk University Ilsan Hospital, and Yongin Severance Hospital). The sputum samples were analyzed with the TB molecular diagnostic system, consisting of a DNA extraction step (D-APDMS with syringe filter) followed by a DNA detection step (quantitative PCR). A shows the Ct values obtained. The test result was considered negative for Ct values of 38 or higher and positive for Ct values below 38. All samples were tested using the TB molecular diagnostic system, Xpert MTB/RIF, MTB PCR, AFB smear, and mycobacterial culture assays . The clinical samples consisted of 29 TB-positive and 59 TB-negative samples, as determined by the results of mycobacterial culture (culture positive and culture negative). The TB molecular diagnostic system detected 21 out of 29 samples as true positives and 44 out of 59 samples as true negatives. Thus, the sensitivity and specificity of the assay were 72.41% and 74.58%, respectively. The sensitivities of Xpert MTB/RIF, MTB PCR, and AFB smear were 50%, 27.27%, and 13.79%, respectively, although the specificity of these conventional assays was 100%. Of note, the sensitivity of the proposed system was approximately three to five times those of the MTB PCR and AFB smear assays. Overall, the data indicated that the sensitivity of the TB molecular diagnostic system was superior to those of conventional TB diagnosis assays. 
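The sensitivity and specificity reported above follow directly from the confusion-matrix counts given in the text (21 of 29 culture-positive and 44 of 59 culture-negative samples correctly classified). A minimal check of that arithmetic is shown below; the predictive values are derived quantities not reported in the original text.

```python
# Confusion-matrix counts reported for the TB molecular diagnostic system.
tp, fn = 21, 29 - 21   # culture-positive samples: detected vs missed
tn, fp = 44, 59 - 44   # culture-negative samples: correctly negative vs false positive

sensitivity = tp / (tp + fn)   # 21/29
specificity = tn / (tn + fp)   # 44/59
ppv = tp / (tp + fp)           # positive predictive value (derived, not reported in the text)
npv = tn / (tn + fn)           # negative predictive value (derived, not reported in the text)

print(f"sensitivity = {sensitivity:.2%}")   # 72.41%
print(f"specificity = {specificity:.2%}")   # 74.58%
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")
```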
There are several assays available for disease diagnosis, and POCT-based diagnostic methods, which are not laboratory-based assays, have been developed. TB diagnosis assays need to be simplified to provide POCT in real-world settings. In this work, we report a simplified molecular diagnostic system for TB detection. This system allows more rapid and simpler diagnosis of TB compared with traditional assays. In particular, the sample preparation step was shortened and simplified, resulting in a potential POCT system. The proposed system has several advantages. First, the proposed sample preparation method enables DNA extraction in about 1–2 h depending on the sample volume (up to 10 mL). High amounts of DNA can be extracted from large samples without reducing detection efficiency. This sample preparation system uses non-chaotropic reagents for capturing MTB and extracting DNA without requiring any specific instrument. The system also helps to increase detection sensitivity by performing bacterial enrichment and DNA extraction with syringes and syringe filters, which simplifies the sample preparation step. Second, the utility of the DNA extraction was confirmed in clinical samples, demonstrating by real-time PCR that the system could potentially be applied on-site in clinical settings. Third, the membrane of MTB is too thick to be disrupted using common lysis buffers; we therefore optimized the lysis buffer with lysozyme to efficiently break the cell membrane. Fourth, compared with the AFB smear or mycobacterial culture assay (which takes 2–8 weeks) , this system is a low-cost, rapid assay that can be performed in resource-limited environments with a simple protocol . It has high potential to be applied as a POCT method in the future. Fifth, we examined various assays for TB diagnosis, and among these, quantitative PCR with specific primers was selected as the optimal detection method for TB. Although the use of quantitative PCR may not be suitable for POCT, the sensitivity of the whole system, including the sample preparation step, was nearly three times that of the MTB PCR assay. shows the comparison of the sensitivity and specificity indices of the proposed system and the other assays presented in this study. Based on these results, further research is needed to overcome the limitations of this study. First, the two techniques, sample preparation and detection, need to be integrated for POC testing with automation. After integration, the operation steps for the system should be minimized, as several hands-on steps are still required for the reaction. Second, a novel detection technique should be investigated for POC testing to replace quantitative PCR, because quantitative PCR has several limitations: it requires large instruments, takes time, and is expensive, which greatly limits its use for POC testing. Third, although the sensitivity of this system was superior to that of the other assays, the specificity should be improved because of the false-positive results in the clinical samples. The causes of these false-positive results were not controlled; they may be related to the high sensitivity of this system and the detection of remnant bacilli or DNA from previous infections. Further study is needed to overcome this limitation. Fourth, this was a proof-of-concept study of the system for TB detection. Although the clinical utility of the system was demonstrated by testing 88 clinical samples, this work is not sufficient to conclusively evaluate the performance of the system. 
Therefore, further studies would be needed to better examine the performance of the system in a larger cohort of clinical samples. Fifth, other types of liquid biopsies should be analyzed with this system to examine its versatility. Finally, recent advances in nanotechnology might be investigated for their implications for TB molecular testing in real-world settings . Nevertheless, this TB molecular diagnostic system can be useful for TB diagnosis as it provides a rapid and simple assay with high sensitivity and specificity. We therefore envision that this system, which combines sample preparation and detection, will enable simple and rapid diagnosis of disease, with clinical applications for communicable and non-communicable diseases. |
“Trapdoor” Medial Scapula Osteotomy for Resection of a Benign Subscapular Neoplasm | a32c5d32-5557-4977-83e8-cbce95ed34fa | 11822220 | Surgical Procedures, Operative[mh] | A mid-20s left-hand–dominant woman with mosaic NF2 presented with a right posterior chest wall subscapular mass first appreciated in 2014 on imaging. Over a period of 7 years, there was no significant change in mass size or quality. She presented with discomfort in the right shoulder and periscapular region. Her surgical history included resection of a left frontal meningioma, a left vestibular schwannoma, and a schwannoma of the lumbar spine. On examination, there was a palpable mass at the medial border of the scapula. She was neurologically intact with normal range of motion. Magnetic resonance imaging (MRI) demonstrated a T2-hyperintense heterogeneously enhancing mass within the right posterior chest wall. The tumor measured approximately 7.2 × 2.9 × 6.6 cm, deep to the rhomboid major muscle and the medial aspect of the scapula and superficial to the posterolateral ribs (Fig. -A). Surveillance imaging demonstrated minimal interval change in mass size over this 7-year period with clinical discomfort associated with scapulothoracic motion (Fig. -B). After discussing risks and benefits of surgery, a mutual decision to perform an en bloc resection of the tumor was made. Although benign peripheral nerve sheath tumors remained at the top of the differential and there was no clinical or interval radiographic concern for malignant transformation, a biopsy is recommended when assessing a tumor of unknown etiology with malignant or undifferentiated features. To avoid potential shoulder muscle dysfunction from a medial periscapular muscle splitting approach, we elected to perform an osteotomy of the medial border of the scapula with subsequent repair to allow the advantage of bone-to-bone healing and to avoid detaching the rhomboid muscles and minimize the risk of injury to the dorsal scapular nerve. In addition, although it is certainly reasonable to perform the tumor resection by detaching or splitting muscles, the size of the tumor would have necessitated an extensive craniocaudal dissection. The patient was positioned prone. An incision was marked along the medial border of the scapula (Fig. -A), and full-thickness skin flaps were raised to provide adequate exposure of the spine of the scapula (Fig. -B). The trapezius muscle was identified and elevated off the underlying rhomboid musculature. The trapezius insertion on the spine of the scapula was then identified and subperiosteally elevated for later repair. Next, the infraspinatus muscle was recessed 1 cm off the medial border of the scapula, which opened a window for the planned osteotomy (Fig. -C). An oscillating saw was used to perform an osteotomy along the medial third of the scapula just distal to the scapular spine (Fig. -A). We spread the osteotomy apart and used electrocautery to free the subscapularis fascia off the anterior face of the scapula. Traction sutures and retractors were then placed within the osteotomy site, opening the “trapdoor” (Fig. -B). This provided excellent exposure to the tumor and posterior chest wall below (Figs. -C and ). We dissected the mass from the posterior chest wall and the anterior aspect of the scapula. The tumor appeared to be well encapsulated. Proximal and distal tumor pedicles were identified and stimulated. No electrophysiological responses were noted. The pedicles were ligated, and the tumor removed en bloc. 
The tumor was then sent to pathology for analysis. The osteotomy was repaired by making drill holes along the 2 edges of the cut bone, and #2 FiberWire suture (Arthrex) was passed through the holes to approximate the osteotomized scapula (Figs. -A and -B). The rhomboid insertion remained undisturbed. The sutures were sequentially tied down for excellent reduction and repair. Towel clips were used to make 2 small bone tunnels at the insertion of the lower trapezius tendon, and #2 FiberWire sutures (Arthrex) were passed through these tunnels and tied to reattach the lower trapezius tendon to its original point of insertion. The superficial fascia was closed with a 0-Vicryl suture and the skin in a layered fashion (Figs. -C, -D and -E). Postoperatively, the patient made an excellent recovery with minimal pain. She was allowed elbow, wrist, and hand range of motion and placed in a sling for 4 weeks. Passive motion was allowed, with the start of active assist range of motion exercises at 4 weeks. The final pathology of the resected tumor was consistent with a hybrid neurofibroma and schwannoma on CD34 (Fig. -A), S100 (Fig. -B), and hematoxylin and eosin (Fig. -C) stains. A postoperative MRI at 6 months demonstrated complete removal of the tumor, and physical therapy was initiated (Fig. -C). At her 1-year follow-up visit, she reported no surgery-related complaints. She had normal shoulder and scapular function and a normal neurological examination with a well-healed incision. Minimal winging of the inferior and medial border of the scapula was noted with extreme abduction of the right upper extremity, without associated pain. At her 3-year follow-up, she has been doing well with no known recurrence, revision, or shoulder dysfunction. Radiographs show a well-healed scapula (Figs. -A and -B). We report a case of a hybrid subscapular neurofibroma and schwannoma tumor in a patient with NF2 resected using a “trapdoor” medial scapula osteotomy. When assessing tumors in NF2 patients, the differential diagnosis should remain broad, including lipoma, neurofibroma, schwannoma, desmoid tumor or elastofibroma and, rarely, sarcoma - . Hybrid characteristics in PNSTs are common, with a review of 31 PNSTs from 14 patients demonstrating hybrid characteristics in 61% of tumors . Histologically, Feany et al. describe hybrid PNSTs as having abundant collagen, elongated cells with wavy areas of myxoid tissue in the neurofibroma-like area, and closely arranged bundles of Schwann cells with spindle-shaped nuclei in the schwannoma-like portion. In addition, S100 and CD34 immunohistochemical stains were chosen since they are good markers for Schwann cells/melanocytes and neoangiogenesis, respectively, which is helpful for characterizing soft tissue and nerve sheath tumors. Analysis would be expected to demonstrate CD34-positive, variably S100-positive staining in the neurofibroma-like component and S100-positive, CD34-negative staining in the schwannoma-like component, as shown by the tumor in our case report. We performed a periscapular muscle-sparing osteotomy of the medial scapular border after elevating the trapezius from the rhomboids (major and minor), just distal to the scapular spine, and recessing the infraspinatus fascia for adequate osteotomy exposure. 
An advantage of this technique is excellent medial exposure of the subscapular region without detaching the rhomboid or serratus anterior muscles, which are important for scapular retraction and protraction, respectively, as well as for elevation and internal rotation of the scapula alongside the other scapular stabilizers (Fig. ) . Rhomboid dysfunction through injury of the dorsal scapular nerve or muscle splitting can cause scapulothoracic dyskinesis, leading to weakness, asymmetric planes of motion, pain, or difficulty with overhead activities such as throwing . Because the osteotomy was distal to the scapular spine, the levator scapulae muscle attachment was preserved as well. Other advantages include bone-to-bone healing, which in the scapula has a reported union rate of up to 99.4% in extra-articular fractures . Disadvantages may include increased technical complexity, secondary iatrogenic or postoperative fracture, malunion, or delayed union in certain nonoptimized hosts. The clinical benefits were apparent when the patient reported no shoulder dysfunction or pain, with a neurovascularly intact shoulder and unimpaired shoulder abduction on physical examination, at long-term follow-up 3 years after the surgery. As a result, our described muscle-sparing osteotomy could be an effective and clinically beneficial surgical technique in certain patients presenting with masses in the medial region of the scapula requiring excision. We describe a case report of a hybrid neurofibroma and schwannoma tumor resection in a patient with NF2 using a medial scapular osteotomy for excision of a medial subscapular mass. It provides excellent exposure of the medial subscapular space while effectively sparing the insertion of the medial periscapular muscles. This may reduce the risk of postoperative scapular dyskinesis or dorsal scapular nerve injury when compared with other more invasive non–muscle-sparing or muscle-splitting techniques.
Continuity and volume of bone cement and anti osteoporosis treatment were guarantee of good clinical outcomes for percutaneous vertebroplasty: a multicenter study | 0b0c2008-3a39-4c34-afdf-290ee7f115bd | 11806568 | Surgical Procedures, Operative[mh] | As a public health problem, osteoporosis has received increasing attention with the arrival of an aging society. Vertebral compression fracture (VCF) is the most prevalent fragility fracture caused by osteoporosis . Conservative treatment consists of prolonged bed rest, back brace immobilization and medication, increasing the risk of kyphosis, nonunion and death . Percutaneous vertebroplasty (PVP) has unique advantages for VCF, greatly reducing the above-mentioned complications, especially in elderly patients . During PVP, bone cement is injected into the vertebral body through a small incision (approximately 1 cm) and a special tubular channel. The injected bone cement stabilizes the fractured vertebral body, effectively relieving early pain . PVP has shown satisfactory clinical outcomes and is widely used; however, some patients still suffer from residual or unrelieved pain after the surgery . Previous studies have discussed the possible factors related to poor outcomes, but no consensus has been reached. In our study, we divided patients into different groups according to the degree of pain relief and identified the risk factors for residual pain. The purpose of our study was to identify the factors associated with good clinical outcomes and provide evidence for surgical strategy. Patients 186 patients who underwent PVP from January 2021 to January 2023 at The Third Hospital of Hebei Medical University and Changzhi People's Hospital in China were reviewed retrospectively in the study. The inclusion criteria included: 1) VCF diagnosed before surgery according to imaging data; 2) bone mineral density (BMD) equal to or less than -2.5; 3) without neurological symptoms; 4) single segment surgery with bilateral approach; 5) minimum 1-year follow-up visit. The exclusion criteria included: 1) spinal tumors, inflammation or other diseases; 2) combined with other fragility fractures; 3) new-onset VCF from postoperative to follow-up visit; 4) incomplete data. Data analysis Visual Analogue Scale (VAS) score (range from 0 to 10) was used to assess preoperative and last follow-up pain. The recovery rate was calculated as: (Preoperative VAS − postoperative VAS)/Preoperative VAS × 100%. Patients with a last follow-up recovery rate greater than the average were divided into Group Good Clinical Outcomes (Group GCO), while the other patients, with a last follow-up recovery rate less than the average, were divided into Group Poor Clinical Outcomes (Group PCO). Preoperative general data including age, gender, body mass index (BMI), bone mineral density (BMD), smoking, drinking, history of trauma or symptoms, follow-up period, local kyphosis Cobb angle, lumbar lordosis (LL) and thoracic kyphosis (TK) were recorded for further statistical analysis. BMD was measured using dual energy X-ray absorptiometry (DEXA). If there was an explicit history of trauma, the time of history was recorded from the onset of the trauma to the surgical day; if not, it was recorded from the onset of symptoms to the surgical day. The local kyphosis Cobb angle was defined as the angle between the upper endplate of the upper vertebral body of the compressed vertebra and the lower endplate of the lower vertebral body of the compressed vertebra. 
LL was defined as the angle between the upper endplate of the L1 vertebral body and the lower endplate of the L5 vertebral body. TK was defined as the angle between the upper endplate of the T4 vertebral body and the lower endplate of the T12 vertebral body. The measurement data, including local kyphosis Cobb angle, LL and TK, were measured three times and the average value was used for statistical testing. Surgical data including surgical segment, surgical time, volume of bone cement, fluoroscopy frequency, standardized treatment for osteoporosis and continuity of bone cement were recorded for further statistical analysis. All surgeries were performed with the assistance of a G-arm fluoroscopy instrument. During the surgery, each posterior-anterior or lateral X-ray was counted as one fluoroscopy exposure; in other words, simultaneous posterior-anterior and lateral X-rays were counted as two fluoroscopy exposures. Continuous bone cement (Fig. ) was defined as no gap between the two pieces of bone cement on the postoperative posterior-anterior X-ray, while discontinuous bone cement (Fig. ) was defined as a visible gap between the two pieces of bone cement. Statistical analysis The SPSS program (version 27.0; SPSS Inc., Chicago, IL, USA) was used for statistical analysis. A P-value < 0.05 was considered statistically significant. Normal data were presented as mean ± standard deviation and non-normal data as median (interquartile range). Quantitative data between Group GCO and Group PCO were tested by Student's t-test or Mann–Whitney U-test according to data distribution. Qualitative data were tested by the Chi-square test. Factors associated with good clinical outcomes were identified by multivariate logistic regression analysis with adjusted odds ratios (ORs), 95% confidence intervals (CIs) and P-values. The potential factors were first screened by univariate analysis, and factors with p < 0.10 were selected into the multivariate logistic model. The Youden index was calculated as sensitivity + specificity − 1, and the maximum Youden index represented the cutoff value.
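To make the outcome definition above concrete, the minimal sketch below computes the recovery rate and the GCO/PCO group assignment exactly as described (split at the cohort mean). The VAS values are made-up illustrative numbers, not patient data from this study.

```python
# Minimal sketch of the recovery-rate calculation and GCO/PCO grouping described above.
# The VAS scores below are illustrative only.
patients = [
    {"id": 1, "vas_pre": 8, "vas_last": 1},
    {"id": 2, "vas_pre": 7, "vas_last": 3},
    {"id": 3, "vas_pre": 9, "vas_last": 2},
]

# Recovery rate = (preoperative VAS - last follow-up VAS) / preoperative VAS * 100%.
for p in patients:
    p["recovery_rate"] = (p["vas_pre"] - p["vas_last"]) / p["vas_pre"] * 100.0

mean_recovery = sum(p["recovery_rate"] for p in patients) / len(patients)

# Patients above the cohort mean go to Group GCO, the rest to Group PCO.
for p in patients:
    p["group"] = "GCO" if p["recovery_rate"] > mean_recovery else "PCO"
    print(f"patient {p['id']}: recovery {p['recovery_rate']:.1f}% -> {p['group']}")

print(f"cohort mean recovery rate = {mean_recovery:.1f}%")
```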
General data of total patients No serious complications were found in the 186 patients after PVP. The 186 patients included 24 males and 162 females, and the average age was 70.74 ± 7.87 years. There were 40 patients who underwent T7-10 PVP, 95 who underwent T11-L2 PVP and 51 who underwent L3-5 PVP, with an average follow-up of 17.40 ± 7.62 months. The BMD of all patients was equal to or less than -2.5, and the average value was -3.17 ± 0.46. A history of trauma was found in 139 patients (74.73%), and the average duration of the history of trauma or symptoms was 14.88 ± 13.84 days. The imaging measurement data, including local kyphosis Cobb angle, LL and TK, were 10.15 ± 6.80, 25.41 ± 10.16 and 41.58 ± 12.09, respectively. The average surgical time was 32.31 ± 11.53 min, and the average fluoroscopy frequency was 42.27 ± 13.50 times. The average volume of injected bone cement was 5.17 ± 1.46 ml. Continuous bone cement was found in 117 patients (62.90%). 
There were 126 patients (67.74%) who received standardized anti-osteoporosis treatment. Comparison of patient characteristics between Group PCO and Group GCO The preoperative VAS score of the 186 patients was 7.69 ± 1.26, and the score decreased to 2.17 ± 1.22 at the last follow-up ( p < 0.001). The average recovery rate was 71.92% ± 15.16%. There were 97 patients divided into Group GCO, whose recovery rate was greater than the average, and the other 89 patients were divided into Group PCO, with a recovery rate less than the average. There was no statistical difference in preoperative VAS score ( p = 0.417) between Group PCO (7.64 ± 1.32) and Group GCO (7.74 ± 1.20). In both groups, the VAS score was significantly decreased after PVP ( p < 0.001). However, the last follow-up VAS score ( p < 0.001) and recovery rate ( p < 0.001) in Group GCO (1.34 ± 0.68, 83.06% ± 7.69%) were significantly better than those in Group PCO (3.08 ± 1.02, 59.77% ± 11.52%) (Table ). The comparison of other patient characteristics between the two groups is shown in Table , and no statistical difference was found. Comparison of surgical data between Group PCO and Group GCO The comparison of surgical data between Group PCO and Group GCO is shown in Table . No statistical difference was found in surgical segment ( p = 0.118), surgical time ( p = 0.246) or fluoroscopy frequency ( p = 0.180) between the two groups. The average volume of bone cement injected into the vertebral body in Group GCO was 5.43 ± 1.51 ml, which was significantly higher ( p = 0.012) than that in Group PCO (4.88 ± 1.34 ml). There were 75 patients in Group GCO who accepted standardized treatment for osteoporosis, a treatment ratio of 77.32%, while that ratio decreased to 57.30% in Group PCO ( p = 0.004). According to the classification method above, the bone cement of 70 patients in Group GCO was found to be continuous on the postoperative posterior-anterior X-ray, a ratio of 70.16%. In Group PCO, the bone cement of 47 patients was found to be continuous, a ratio of only 52.81%. The bone cement continuity rate was significantly higher in Group GCO than in Group PCO ( p = 0.006). Logistic regression analysis and receiver operating characteristic (ROC) curve Logistic regression analysis was used to identify the factors associated with good clinical outcomes after PVP. First, potential factors were selected by univariate analysis, and continuity of bone cement ( p = 0.007), standardized treatment for osteoporosis ( p = 0.004) and volume of bone cement ( p = 0.010) showed statistically significant differences. Multivariate logistic regression showed that three factors were closely associated with good clinical outcomes: continuity of bone cement (OR = 2.237, 95% CI = 1.191–4.201, p = 0.012), standardized treatment for osteoporosis (OR = 2.105, 95% CI = 1.089–4.068, p = 0.027) and volume of bone cement (OR = 1.271, 95% CI = 1.023–1.579, p = 0.030) (Table ). The ROC curve analysis (Fig. ) (Table ) showed moderate accuracy for the association between volume of bone cement and good clinical outcomes (area under the curve: 0.603, P = 0.015). The cutoff value of the volume of bone cement was 5.5 ml.
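For readers unfamiliar with how a Youden-index cutoff such as the 5.5 ml value above is obtained from an ROC analysis, the sketch below shows the generic procedure with synthetic data. The simulated cement volumes, group means and the resulting AUC are assumptions for illustration only; they do not reproduce the study's dataset or its reported AUC of 0.603.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic example: injected cement volume (ml) for good- vs poor-outcome patients.
# Group sizes mirror the study (97 GCO, 89 PCO); the distributions are invented.
volume_good = rng.normal(5.4, 1.5, 97)
volume_poor = rng.normal(4.9, 1.3, 89)
volume = np.concatenate([volume_good, volume_poor])
outcome = np.concatenate([np.ones(97), np.zeros(89)])  # 1 = good clinical outcome

fpr, tpr, thresholds = roc_curve(outcome, volume)
auc = roc_auc_score(outcome, volume)

# Youden index J = sensitivity + specificity - 1 = TPR - FPR; the cutoff maximizes J.
youden_j = tpr - fpr
best_cutoff = thresholds[np.argmax(youden_j)]

print(f"AUC = {auc:.3f}")
print(f"Youden-index cutoff for cement volume ≈ {best_cutoff:.1f} ml")
```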
Osteoporosis is a worldwide public health issue, with around 200 million people suffering from the disease .
Approximately 9 million fractures are closely associated with osteoporosis annually worldwide, and VCF is the most prevalent among these fractures . VCF is often the beginning of a series of fractures which not only reduce quality of life but also affect expected lifespan . The prevalence of vertebral fracture increases with age, and the incidence rises to 21.9% in women over 70 years old . More and more attention has been paid to the diagnosis and treatment of VCF in recent years around the world. Compared with conservative treatment, surgical intervention rapidly relieves the pain and thus leads to more rapid mobilization and reduces bedridden-related complications . The injected bone cement fills the fracture gap and enhances the strength and stability of the vertebral body, which achieves immediate analgesia. Advantages including a small incision, minimal bleeding, short surgical time, fewer surgical complications and satisfactory immediate surgical efficacy have made PVP widely used around the world . However, residual pain in some patients became the main reason for poor clinical outcomes, and previous studies have tried to identify the related factors without reaching consensus . Previous studies proposed various types of bone cement distribution and concluded that the distribution was significantly associated with clinical outcomes . The study of Yang et al. showed that patients with satisfactory bone cement distribution (bone cement spread from the superior to the inferior endplate, from the medial cortex of bilateral pedicles, and from the anterior cortex to the posterior third of the vertebral body) complained of less back pain after PVP. Li et al. found that confluent cement masses were associated with better clinical outcomes than separated cement masses (isolated and rarely connected to each other), which is similar to the distribution in our study. Mo et al. divided bone cement distribution into two categories, a sufficient group and an insufficient group, according to whether bone cement diffused into the fracture line and whether a vacuum cleft existed; in their study, the sufficient group obtained better clinical outcomes. Tan et al. found that a distribution in which bone cement contacted both the upper and lower endplates led to less postoperative back pain. Xu et al. found that if bone cement was located on both sides (above and below) of the fracture on the postoperative lateral X-ray, patients achieved better clinical outcomes. As a whole, uniformly distributed bone cement was the guarantee of good clinical outcomes after PVP. In our study, continuous bone cement provided uniform force to support the fracture line; as a result, the fractured vertebrae were more stable, which led to less postoperative back pain. The association between bone cement volume and clinical outcomes is still controversial . Barriga-Martín et al. found that small amounts of injected cement obtained clinical results similar to higher amounts, which instead increased the possibility of cement leak and further complications. The study of Wang et al. showed that an injected bone cement volume > 4 ml provided good clinical outcomes compared with < 4 ml at both the postoperative and last follow-up visits. Martinčič et al. suggested a minimum volume of injected cement of 4–6 ml, because the stiffness of the vertebral body increased as the volume of cement increased up to 4–6 ml. Nieuwenhuijse et al. found that the more residual pain patients complained of, the less the volume of bone cement on CT scans, and they proposed an individualized recommended dosage according to spinal segment, sex and fracture severity. 
Kim et al. suggested injecting as much cement as possible if no leakage occurred. In our study, a correlation was shown between the volume of bone cement and clinical outcomes, although the result was not strongly significant; a larger volume of bone cement provided more support and stability. To reduce bone cement leakage, we suggest increasing the fluoroscopy frequency appropriately while injecting the bone cement, especially when obvious endplate cracks are present. Our study showed that postoperative standardized anti-osteoporosis treatment was beneficial for postoperative pain relief. Huang et al. conducted a prospective cohort study that showed similar results: patients who received postoperative zoledronic acid treatment complained of less back pain at the 12-month follow-up compared with the control group. Kong et al. observed that teriparatide treatment after PVP was also beneficial in increasing BMD and reducing pain. Hu et al. injected zoledronic acid intravenously before PVP and found that patients treated with the combined method had the advantages of long-term pain relief, increased bone density and a lower risk of refracture. On the other hand, osteoporosis is independently associated with back pain, which significantly reduces quality of life . Pharmacological treatment for osteoporosis is effective in improving back pain, and among the available drugs, teriparatide showed a better effect and more patient satisfaction . To sum up, anti-osteoporosis treatment is an essential part of achieving good long-term clinical outcomes after PVP. In our study, 32.26% of patients still did not receive standardized treatment for osteoporosis for various reasons, so it is important to strengthen publicity and education on osteoporosis and enable more patients to benefit from anti-osteoporosis treatment. This study has several limitations. First, this study was limited by its retrospective nature. Second, the short follow-up time might lead to biased results. Third, our study excluded patients with new-onset VCF between surgery and the follow-up visit to allow a more accurate assessment of pain relief after surgery, which might have excluded some evaluation data. Therefore, randomized controlled trials with long-term follow-up should be performed to verify the results. PVP effectively relieved the back pain of patients and is worthy of promotion. However, postoperative residual pain was an important factor that reduced clinical outcomes. Continuous bone cement and standardized treatment for osteoporosis were guarantees of good clinical outcomes for PVP, and an injected bone cement volume > 5.5 ml might also be a guarantee.
Patient and family engagement interventions for enhancing patient safety in the perioperative journey: a scoping review | 91ba3f72-a9cd-455c-bac7-b8ebd5043958 | 11836844 | Surgical Procedures, Operative[mh] | Prior research has shown that surgical patients are at 2.3 times higher risk of adverse events, highlighting the potential role of patient and family engagement (PFE) approach in improving patient safety and quality in healthcare. Nonetheless, the current body of literature falls short of providing a holistic understanding of PFE across the entire perioperative process, underscoring the necessity for more in-depth exploration. This study provides a comprehensive mapping of the interventions using PFE approach across various periods of the perioperative journey, highlighting their focus areas, geographical distribution and type of surgical procedure. The findings show that most of the interventions adopted consultation type of PFE approach with fewer using involvement or partnership and shared leadership. In addition, the study reveals a predominance of PFE interventions at the direct care level, particularly in patient information and education, while also identifying a scarcity of interventions targeting organisational and policy-making levels. The study highlights the pressing need for expanded PFE interventions at organisational and policy-making levels, as well as across the entire spectrum of the engagement continuum. Undergoing a surgical intervention is a risk factor for the patient safety, as evidenced by research. A study conducted in Spain across 34 hospitals analysed the prevalence of adverse events and determined that surgical patients have 2.3 times higher risk of suffering from an adverse event. Moreover, surgical adverse events tend to be more severe accounting for 92% of cases of prolonged hospital stay due to an adverse event. In addition, surgical patients show higher prevalence of comorbidities prior to the surgery which further exacerbates the risk of adverse events intrinsic to the complexity of the surgical procedure. However, research has shown that although the prevalence of adverse events in surgical patients is significantly high so is its preventability. In a year, approximately 243 million surgeries are performed worldwide, and medical advances aligned with the technological innovation allow for more surgeries with higher levels of complexity to take place. It is thus paramount to study multilevel interventions aimed at increasing patient safety and quality of care in patients submitted to a surgical procedure. Patient and family engagement (PFE) defined by WHO as ‘the facilitation and strengthening of the role of those using services as coproducers of health, and healthcare policy and practice’ is an approach which has shown positive results on the patient safety and quality in healthcare. PFE contributes to provision of healthcare service more responsive to the patients’ needs, increases quality of healthcare, allows for timely detection of errors or omissions, reduces healthcare-associated infections and decreases adverse events. In addition, PFE allows the patients to take ownership and share responsibility for their care process, contributes to a shared decision-making process in health and consequently has positive impact on patient safety. 
The influence of PFE on patient safety comes from the understanding that patients and their family members remain the only constant throughout the entire healthcare journey, offering invaluable insights into healthcare process. The WHO Global Patient Safety Action Plan 2021–2030 highlights the importance of PFE approach by dedicating one of its strategic objectives to PFE (ie, strategic objective 4 (SO4) ‘Engage and empower patients and families to help and support the journey to safer healthcare’) reflecting a multilevel strategic and operational commitment. While surgical complexity is recognised as a risk factor for patient safety, the extent to which PFE approach is used in the interventions across the perioperative journey remains inadequately explored. The existing literature has focused on intrahospital infection control, implementation of specific interventions (eg, hand hygiene) or restricted to a specific field of healthcare (eg, nursing). Therefore, the current scoping review aims to comprehensively map the existing interventions with PFE approach focused on improving patient safety across various types of surgical procedures throughout the entire perioperative journey. Furthermore, this review aims to identify the type of PFE approach, and specific activities implemented in the eligible studies. The current scoping review was conducted using the Joanna Briggs Institute updated guidelines for scoping reviews and reported according to Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews . In addition, the review protocol was registered at the Open Science Framework and is available for consultation through the following link: osf.io/hnkj5 . Eligibility criteria Population The current review focused on surgical patients, that is, patients who underwent emergency or elective surgical procedures with hospitalisation or in ambulatory care. No restriction was placed on the type of surgical procedures or anatomic location. In addition, the study included interventions targeted at the surgical patient’s family members, informal caregivers, patient advocates and patient champions. Interventions were restricted to adult surgical patients (≥18 years old) and adult family/informal caregivers of any sex, gender, ethnicity who were receiving perioperative care. Concept The core concepts of this review are interventions in the field of ‘patient safety’ which adopt PFE approach. Interventions eligible for the current review are all the actions listed under the SO4 of the WHO Global Patient Safety Action Plan 2021–2030. Current review adopted WHO definition for ‘patient safety’ that is, ‘a framework of organised activities that creates cultures, processes and procedures, behaviours, technologies, and environments in healthcare that consistently and sustainably: lower risks, reduce the occurrence of avoidable harm, make error less likely and reduce its impact when it does occur’. When referring to PFE approach, this review adopts the definition provided by the Carman et al which describes PFE as an ‘active partnership at various levels across the healthcare system—direct care, organisational design and governance, and policy-making—to improve health and healthcare’. Context The context of the intervention included the entire perioperative care, that is, from the moment when patients are contemplating to undergo surgical procedure until hospital discharge, handover to primary healthcare services or rehabilitation services. 
No restrictions were placed on the type of healthcare provider/setting or country of implementation. Sources of evidence Sources of evidence were restricted to articles published in indexed peer-reviewed journals. Quantitative, qualitative and mixed-method studies were included, while study protocols and evidence syntheses (eg, systematic reviews, meta-analyses, literature reviews) were excluded. No grey literature was consulted. Further information regarding the eligibility criteria can be found in . Search strategy The search strategy included an initial iterative process of constructing the search query by the first author with the support of a qualified research librarian. In the first stage, a simplified search query was applied in two electronic databases, PubMed and Web of Science. The search results were screened by the first author, and key terms in the titles and abstracts of the identified studies were analysed and retrieved. Afterwards, the PubMed MeSH thesaurus was consulted to identify broader keywords and an enhanced search query was constructed. The final search query included MeSH thesaurus terms combined with free terms. The search was conducted on five electronic databases (PubMed, Web of Science, SCOPUS, CINAHL and PsycINFO), and the query and filters were adjusted according to the requirements of each electronic database. Publications were limited to articles in English, Portuguese and Spanish published in the last 20 years (ie, 2003–2023). The complete search query and the filters applied for individual databases can be consulted in . Study selection The records identified in each electronic database were retrieved as research information system (RIS) files and uploaded to CADIMA (ie, free online software which assists the entire process of evidence synthesis), which merged the RIS files and identified and removed duplicates on confirmation by the first author. A double deduplication process was undertaken using the Systematic Review Accelerator, where a second round of duplicate detection and removal was performed. Afterwards, the unique records were screened by 'title' and 'abstract' according to the eligibility criteria previously reported. Full-text screening of the eligible records was conducted with the assistance of the Zotero citation manager, where the PDF files containing the full text were uploaded. The research librarian and corresponding authors were contacted in an effort to obtain full-text PDF files of articles which were not openly accessible. A double-screening method was adopted for the entire screening process, with four authors independently screening the articles (AMA+ASeyfulayeva, BFF+ASeyfulayeva and AShaikh+ASeyfulayeva). Differences among the authors were addressed through meetings where consensus was reached.
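The deduplication described above was performed in CADIMA and the Systematic Review Accelerator rather than in custom code; purely as an illustration of the underlying logic, a minimal Python sketch of merging the exported records and collapsing duplicates by DOI or normalised title might look as follows (the record fields are hypothetical placeholders, not the review's actual export):

```python
# Illustration only: the review used CADIMA and the Systematic Review Accelerator
# for deduplication; this sketch shows the underlying logic on records assumed to
# be already parsed from the exported RIS files into dicts with hypothetical
# "title", "year" and "doi" fields.
import re

def normalise_title(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace so near-identical
    exports from different databases collapse to the same key."""
    return re.sub(r"[^a-z0-9]+", "", title.lower())

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for record in records:
        doi = (record.get("doi") or "").lower()
        key = doi if doi else (normalise_title(record.get("title", "")), record.get("year"))
        if key in seen:
            continue  # same record exported by more than one database
        seen.add(key)
        unique.append(record)
    return unique
```

The dedicated tools additionally support fuzzy matching and reviewer confirmation of each candidate duplicate, which this sketch deliberately omits.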
Data extraction and analysis The data extraction of eligible articles was undertaken using a specifically tailored data charting form in a Microsoft Office spreadsheet, which was modified throughout the study to address the needs of the data extraction process. The information extracted from the articles included article metadata, population, concept and context. In addition, all the interventions in the eligible articles were classified using two frameworks: the 'Multidimensional framework for patient and family engagement in health and healthcare' by Carman et al , to describe the level and continuum of patient and family engagement. This framework subdivides the interventions into three possible levels of action: 'direct care', 'organisational design and governance' and 'policy-making'. In addition, the framework considers that PFE varies according to information flow and decision-making power, representing a continuum of engagement including 'consultation', 'involvement' and 'partnership and shared leadership'. The definitions of each level of action and continuum of PFE, as reported by Carman et al , are shown in . The second framework comprised the SO4 subactions within the 'Global Patient Safety Action Plan 2021–2030', used to categorise each intervention according to its action level, as presented in . Data extraction from each article was conducted independently by three authors (BFF+ASeyfulayeva and AShaikh+ASeyfulayeva), and discrepancies were addressed via email and meetings. Publication year, country of implementation, type of surgical procedure, levels of engagement and subactions within SO4 were characterised using absolute and relative frequencies.
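To make the charting and frequency step concrete, the sketch below tags invented placeholder records with a level of engagement, a point on the continuum and an SO4 subaction, and then derives the absolute and relative frequencies used in the results; it is illustrative only and does not reproduce the review's extraction spreadsheet.

```python
# Sketch of the charting step: every eligible article is tagged with a Carman et al
# level of engagement, a point on the engagement continuum and a WHO SO4 subaction,
# and absolute/relative frequencies are then derived. The two records below are
# invented placeholders, not the review's extraction data.
from collections import Counter

charted = [
    {"id": "article_01", "level": "direct care",
     "continuum": "consultation", "so4_subaction": "SO4.5"},
    {"id": "article_02", "level": "policy-making",
     "continuum": "partnership and shared leadership", "so4_subaction": "SO4.2"},
    # ... one entry per eligible article
]

def frequencies(field: str) -> dict[str, tuple[int, float]]:
    counts = Counter(record[field] for record in charted)
    total = sum(counts.values())
    return {category: (n, round(100 * n / total, 1)) for category, n in counts.items()}

for field in ("level", "continuum", "so4_subaction"):
    print(field, frequencies(field))
```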
A total of 765 records were identified by applying the search query in the 5 electronic databases; 93 records were duplicates, yielding 672 records that were screened by 'title' and 'abstract'. After double screening of 'title' and 'abstract', 574 records were excluded while 98 were sought for retrieval. It was not possible to obtain the full text of 10 reports; therefore, 4 corresponding authors were contacted via email and the professional research librarian was asked to obtain the missing reports. Five reports were obtained through the research librarian, two reports were obtained from the corresponding authors and three were not retrieved. Out of the 95 reports screened for full text, 63 were excluded (reasons for the exclusion of each report are given in ), while 32 were deemed eligible for data extraction and analysis . Eligible articles were mainly published in the past 10 years (n=9, 28% between 2015 and 2016; n=8, 25% between 2021 and 2022). PFE interventions were implemented in 11 different countries; the majority originated in the USA (n=13, 41%), followed by the UK (n=6, 19%) and Canada (n=4, 13%). Regarding the type of surgical patients, interventions were predominantly aimed at patients undergoing 'multiple/all types' of surgical procedures, that is, the authors did not restrict their intervention to a specific type of surgical procedure (n=15, 47%); 19% (n=6) of the articles reported interventions focused on cardiothoracic surgical patients, while 9% (n=3) focused on gynaecological procedures. A summary of the main characteristics of the eligible studies is reported in , and the detailed data extraction for each eligible article is provided in .
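The screening flow reported above is internally consistent, as the following arithmetic check illustrates:

```python
# Arithmetic check of the screening flow reported above.
identified = 765
duplicates = 93
screened = identified - duplicates                             # 672 titles/abstracts screened
excluded_on_title_abstract = 574
sought_for_retrieval = screened - excluded_on_title_abstract   # 98 reports
not_retrieved = 3
assessed_full_text = sought_for_retrieval - not_retrieved      # 95 reports
excluded_full_text = 63
included = assessed_full_text - excluded_full_text             # 32 eligible articles
assert (screened, sought_for_retrieval, assessed_full_text, included) == (672, 98, 95, 32)
```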
Level of action and continuum of PFE A total of 28 studies, representing 88% of the patient safety interventions included in the current review, focused on 'direct care'. Within the 'direct care' level, the majority (n=14, 44%) of the interventions adopted a 'consultation' type of PFE approach. These were largely health literacy interventions delivered through written patient information material for postoperative pain management, an information booklet and diary for oncological patients aged ≥65 years undergoing major surgery, or information regarding hand hygiene for patients during the hospitalisation period in surgical wards. Digital formats of information provision were also identified, such as a telenovela about the kidney transplant process targeting Hispanic patients with end-stage renal disease on the kidney transplant waiting list, or a take-home video for prostate cancer patients scheduled for robotic-assisted laparoscopic prostatectomy. Two studies in the UK reported the implementation of a 'photo at discharge' scheme, in which patients after cardiothoracic surgery receive a photo of the surgical site and tailored patient information material to prevent surgical site infection. Online patient education courses and a mobile application with tailored information for pregnant people undergoing caesarean delivery were also among the PFE actions at the 'direct care' level of engagement. Within the 'direct care' level, 16% (n=5) of the interventions presented an intermediate degree of engagement, that is, 'involvement'. Examples of such interventions include the implementation of the 'Tell Us Card' tool, which hospitalised surgical patients can use to convey concerns to healthcare professionals during hospitalisation, and mobile applications supporting patients in the preoperative and postoperative periods with patient information material and patient-reported outcomes. Around 28% (n=9) of the interventions in the 'direct care' category showed the highest type of engagement, 'partnership and shared leadership', which focused on implementing user-friendly informed consent forms and a variety of decision-making tools, for example, question prompt lists, a tool to aid patients with heart failure who are deciding whether to undergo surgery to place a ventricular device, or methods of providing information that aim to enhance liver transplant patients' decision-making regarding organ quality. At the 'organisational design and governance' (n=2, 6%) and 'policy-making' (n=2, 6%) levels, the distribution was identical across the continuum of PFE engagement . At the organisational level, the PFE approach was adopted at the 'consultation' level in the development of a mobile application to aid the postoperative period for cancer patients who underwent colorectal surgery, and at the 'partnership and shared leadership' level to develop an OR Black Box, an intraoperative tool which records all the information of the surgical procedure. At the 'policy-making' level, patients were engaged in the development of a return to work guideline for gynaecological patients and in the development and validation of a surgical patient safety checklist for surgical patients. Intervention subactions Categorisation of the subactions of the patient safety interventions with a PFE approach indicates that 81% (n=26) were targeted towards the provision of information and education to patients and/or their families. Around 16% (n=5) of the interventions were focused on the 'codevelopment of policies and programmes', among which are interventions regarding an enhanced version of informed consent, the development of a patients' surgical checklist and return to work guidelines for gynaecological patients.
One intervention was related to action SO4.3 , namely the establishment of a live donor champion programme for training champion nurses to provide care to live liver donors. For further details regarding the classification of each article according to the level of PFE and the WHO SO4 subactions, consult . It is important to note that articles which reported patient engagement in the development of the PFE strategies were identified . However, the intervention classification was restricted solely to the level, continuum and subaction of the developed/implemented intervention itself rather than to the involvement in the research process.
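As a consistency note, the rounded percentages reported in this results section correspond to the 32 eligible articles as the denominator; a brief check:

```python
# The rounded percentages in this results section correspond to a denominator
# of the 32 eligible articles.
total = 32
counts = {
    "direct care": 28,                              # reported as 88%
    "consultation (direct care)": 14,               # 44%
    "involvement (direct care)": 5,                 # 16%
    "partnership and shared leadership": 9,         # 28%
    "organisational design and governance": 2,      # 6%
    "policy-making": 2,                             # 6%
    "information and education (SO4.5)": 26,        # 81%
    "codevelopment of policies and programmes": 5,  # 16%
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
```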
The current review identified that interventions with a PFE approach aimed at improving patient safety throughout the perioperative journey were focused on 'direct care' and predominantly implemented health literacy interventions, that is, interventions targeting SO4.5 of the WHO Global Patient Safety Action Plan 2021–2030, entitled 'Information and education to patients and families'. These findings align with those reported by a mixed-method systematic review by Cooper et al , which analysed the operationalisation of PFE interventions in the direct care of surgical patients who underwent major surgeries, and with a systematic review on PFE by Park and Giap. In discussing the results, it is important to note that PFE in the research process itself was identified but not considered for classification of the PFE level, as the focus of the review was the level of engagement in the developed tool. This avoided overestimating the level of engagement of the studies eligible for the analysis in this review. The results of the current review align with the systematic review conducted by Park and Giap, which reported that most of the eligible studies concentrated on engagement at the 'direct care' level, with fewer addressing 'organisational design and governance' and no interventions pertaining to 'policy-making'. These results mirror the findings of our analysis in the current scoping review, where the primary focus of interventions was on engaging patients in their direct care treatment plans. Similar trends were identified in the study by Cooper et al , which reported that 51.7% of the interventions were related to the 'provision of information', 20.6% were related to 'communication' and 20.7% of interventions focused on 'decision-making' and 'action-taking'.
Although the predominant focus of the PFE approach identified in the current review was at the 'direct care' level, a small number of studies adopted a PFE approach at the organisational and policy-making levels (two at each level). According to the systematic review of qualitative evidence conducted by Merner et al on patients' engagement in the design, delivery and evaluation of healthcare services (excluding the direct care level), PFE has a positive impact on the participants and on the healthcare service. Patients who have engaged in group coproduction at the organisational and policy-making levels reported an increased sense of empowerment, confidence, skills and knowledge. Meanwhile, healthcare providers value patients' unique perspectives, which can enhance care delivery. Furthermore, this systematic review reports, with a high level of confidence, that PFE improves the person-centredness, design, delivery and physical infrastructure of healthcare services. However, a systematic review by Lowe et al (undertaken complementary to Merner et al ) was inconclusive regarding the impact of PFE due to a lack of high-quality evidence. Therefore, based on the benefits outlined by Merner et al and the challenges in assessing its impact identified by Lowe et al , as well as the scarcity of interventions at the organisational and policy-making levels found in the current review, further high-quality research and interventions are needed at these levels to advance PFE in enhancing patient safety, with monitored outcomes for impact and sustainability. The distribution of the interventions according to the subactions of WHO SO4 is similar to the trends reported in the systematic review by Cooper et al . The majority of the interventions show a 'consultation' level of PFE, with more than 80% of eligible studies focused on SO4.5 of the WHO Global Patient Safety Action Plan, concerning information and education of patients and their families/caregivers. These interventions take a variety of formats, ranging from verbal and written to digital or multimedia education material. Our findings align with those of the scoping review on health literacy interventions in the surgical context conducted by Jaensson et al . However, given that patient education depends on the timing, quantity, quality and method of the intervention, further research is required to establish guidelines for the development, implementation and monitoring of health literacy interventions concerning patient safety in surgical contexts. These results, concerning the extent of PFE and the focus of interventions, suggest that efforts to enhance patient safety often involve one-way communication, where healthcare professionals provide information to patients, families and caregivers without fostering bidirectional information exchange or shared decision-making. Consequently, they raise the question of whether this form of PFE can be considered genuine patient engagement or rather tokenism of engagement and participation, with limited impact on the quality, safety and democratisation of healthcare services. Although a research librarian was involved in the rigorous process of developing a search strategy for the electronic databases, it is possible that relevant records were missed. A possible indication of this is the predominance of interventions focused on direct clinical and patient interaction, with fewer on policy-making, which may imply that keywords such as 'public involvement' and 'public' could have yielded a broader search result.
Although the search strategy did not place any restriction on the type of PFE approach, the use of the term 'public' versus 'patient' could possibly have identified further studies of engagement at the policy-making level. However, a study conducted by Sypes et al on public involvement in low-value care predominantly used the keyword 'public involvement' and yielded an overall trend similar to the current scoping review, with a higher prevalence of interventions focused on direct interaction between patient and physician. This suggests that the under-representation of patients, families and caregivers at the policy and organisational levels is a tendency across different healthcare fields. In addition, no grey literature was consulted for this review. This decision may have restricted the breadth of our findings. Grey literature, such as reports and policy papers, often contains valuable insights, particularly regarding PFE interventions at the organisational and policy-making levels, which may not be captured in indexed peer-reviewed journals. In addition, the Carman et al PFE framework used in this review requires further definition of each group to allow a more accurate classification of each intervention. The difficulties in classification were discussed by the multidisciplinary team of researchers involved in the screening process and, although the classification was peer reviewed, the distribution between the two extremes can be indicative of unclear classification groups. The findings of the current review highlight that PFE interventions targeting patient safety are focused on the direct care level, particularly in the provision of patient information and education. However, a gap in research and interventions concerning the organisational and policy-making levels was identified. Therefore, there is a pressing need for interventions that actively involve patients and families in broader organisational and policy-making contexts. Online supplemental files 1–5: 10.1136/bmjoq-2024-002986. |
Synergistic effects of PGPRs and fertilizer amendments on improving the yield and productivity of Canola ( | 964fcb90-dc63-441a-89bf-46f19232faa6 | 11730122 | Microbiology[mh] | Oil crops are rich sources of protein, vitamins, dietary fibers, minerals, cooking oil, and other raw materials . Canola ( Brassica napus L.) is an oil crop species of the Brassicaceae family cultivated worldwide . China is the third largest canola producer in the world, with a yield of 13.1 million metric tons from 2019 to 2020 . Canola still ranks behind soybean in terms of oil production . In Pakistan, canola is cultivated in all provinces on 26.02 thousand hectares, with an annual production of 102 thousand tonnes from 2018 to 2019 . Canola plants, ranging from 100 to 150 cm in height, with some varieties reaching 180 cm, have alternate, simple, lanceolate leaves with a waxy coating and a length of 20–40 cm, and exhibit a typical Brassicaceae inflorescence with bright yellow flowers arranged in a raceme, allowing both self-pollination and cross-pollination . The plants produce slender pods that are 5–10 cm long and contain 10–20 spherical seeds that are 1.5–2.5 mm in diameter and brown or black, and they have an erect main stem with 2–4 branching levels and a diameter of 1–2 cm . The taproot system, together with lateral roots extending to a depth of 60–120 cm, anchors the plant in the soil . Plant growth promoting rhizobacteria (PGPR) are bacteria that inhabit the rhizosphere of plants; their beneficial effects include improved nutrient acquisition and root modification through plant hormone control. The use of PGPR in agriculture enhances crop yields and reduces pollution, hence enhancing ecological and economic security . PGPRs are essential to sustainable agriculture since they contribute to providing a healthier and more productive food system . Biochar, produced through the pyrolysis of biomass, has several uses; in agriculture it has been used largely to increase soil fertility, promote plant germination, and provide crops with nutrients . Reported outcomes indicate that such interventions are effective, enhancing farming productivity and, consequently, the returns from agricultural activities . Compost is a product of the natural decomposition of organic material, characterized by humus and valuable nutrients that ameliorate the physical, chemical, and biological properties of soils . Compost enhances the ability of the soil to hold other nutrients, since it increases the cation exchange capacity (CEC) of the soil, and supplies fundamental nutrients such as nitrogen, phosphorus, and potassium to plants . When compost is incorporated into the soil, its nutrient holding and nutrient delivery capacity are increased, resulting in improved plant health . Residues of animal feces and urine, together with plant material, form manure, which is important for farming because it provides nutrients for plants and improves the organic matter content and tilth of the soil . The use of immature and composted manure on croplands improves the condition of the upper soil layer and its ability to hold water and nutrients, increases microbial activity, and supplies vital minerals and plant nutrients . By using manure as a natural fertilizer, farmers can help soil and crops grow healthily and naturally .
Poultry manure is a rich source of nutrients, especially nitrogen, and has therefore quickly become a favored fertilizer for nitrogen-deficient soils; it is richer in nitrogen, phosphorus, and potassium than other animal manures . Pelletized chicken manure products may contain additional nutrients and are therefore regarded as effective fertilizers for encouraging the production of green leaves in plants . Animal droppings, such as poultry manure, are rich in nitrogen, phosphorus, potassium, and other nutrients that support both the growth and development of crops . The low productivity of canola in Pakistan is attributable to several factors . Global warming and an increase in weather variability affect canola outputs, and unpredicted weather events lead to poor germination; low soil health and poor topsoil formation through erosion and/or acidification erode soil fertility and reduce the water-holding capacity of the soil . Poor water supply and inefficient water use have compounded this problem, resulting in water scarcity and poor plant growth . The seeds that are used are often of low quality and are mostly obtained from farmers' fields, while seed treatment is often poor, which leads to low germination rates and poor seedling development . Unfavorable growing practices, such as planting at the wrong time or density and poor control of weeds, pests, and diseases, are also common . Furthermore, poor uptake and distribution of fertilizers and low nutrient status in the soil result in poor plant growth and development . To address these factors, this two-year research study aimed to determine the effects of the integrated application of biochar, compost, poultry manure, animal manure, and chemical fertilizer with PGPRs on canola yield and other traits . By considering the cumulative interaction of these treatments, this study identified the treatment combinations most beneficial for increasing canola yield, improving soil health, and promoting the adoption of sustainable agriculture in Pakistan . The key objectives of this study were to evaluate the synergistic effects of Azotobacter salinestris and Bacillus subtilis combined with various organic and inorganic fertilizers on the yield and productivity of canola and to assess their impact on soil health and nutrient efficiency. The hypothesis was that the integration of PGPRs with fertilizer amendments would significantly enhance canola yield, agronomic traits, and soil quality compared to fertilizers alone. Experimental site The study was conducted in the research area of the College of Agriculture, Bahauddin Zakariya University (BZU), Bahadur Subcampus Layyah; geographically, it is situated at 30° 58′ 49′′ N latitude and 70° 57′ 57′′ E longitude in southern Punjab, Pakistan, at an altitude of 147 m above mean sea level. The soil was sandy, with 0.56% organic matter, and the pH was slightly basic at 8.2 . The soil of Layyah has also been described as a sandy loam that provides moderate drainage capacity and holds a moderate amount of standing water; hence, it can support several crops . A measured soil pH of 7.1 is slightly below the alkaline range and influences nutrient solubility and the microbiological processes occurring in the soil. Hence, the sandy loam soil of Layyah, which is chemically basic in constitution, requires judicious inputs to increase its fertility and yield.
Experimental layout and inputs A field experiment was established over two years (2022–2023 and 2023–2024) using a randomized complete block design (RCBD) with a two-factor factorial arrangement, three replications, and 21 plots in each replication, resulting in a total of sixty-three plots. Every plot was 3 m × 1 m, the row-to-row distance was 40 cm, the plant-to-plant distance was 10 cm, and HC-022B hybrid canola was used. The seeds of HC-022B hybrid canola were sourced from Punjab Certus Seed Kanzo Combagro Evyol Group, which specializes in high-yield canola hybrids. Azotobacter salinestris and Bacillus subtilis cultures for the PGPR treatments were obtained from the Ayub Agricultural Research Institute, Faisalabad. The experimental design employed a 3 × 7 factorial arrangement comprising two factors: the level of plant growth-promoting rhizobacteria (PGPR) (Factor A) and the fertilizer amendment (Factor B). Factor A consisted of three PGPR levels: PGPR0 received no PGPR, PGPR1 was inoculated with Azotobacter salinestris , and PGPR2 was inoculated with Bacillus subtilis . Factor B comprised seven fertilizer amendment treatments: the control (no fertilizers or amendments added), fully recommended fertilizers (FFs), half recommended fertilizers (HFs), biochar (BC), compost (CP), poultry manure (PM), and animal manure (AM). The seedbed was prepared with 2 tons of biochar per hectare, 3 tons of compost per hectare, 2 tons of poultry manure per hectare, and 2 tons of animal manure per hectare, according to the respective treatments. Seeds were immersed in the respective PGPR cultures, Azotobacter salinestris or Bacillus subtilis . The crop was planted on 3rd November 2021 at a rate of 0.81 kg/ha, and weed management was performed manually. The crop was treated with the recommended full dose of 140:55:40 kg/ha N:P:K or with half of the recommended rate, according to the fertilizer treatment. Pest management was performed by applying commercial products containing imidacloprid and bifenthrin at 600 ml/ha according to established guidelines and protocols . Thinning was carried out at 20 and 35 days after sowing to maintain proper plant spacing, recommended irrigation was applied, and other agronomic practices were followed. The crop was allowed to mature, harvested on 28 April 2022, spread on a clean surface for sun-drying for 10 consecutive days, and then threshed to estimate the grain yield. The above treatments were repeated in the second year of the experiment with the same agronomic practices, harvesting, and data collection methods. The specific dosages of biochar (2 t/ha), compost (3 t/ha), poultry manure (2 t/ha), and animal manure (2 t/ha) were selected based on their nutrient profiles and prior agronomic recommendations for sandy loam soils like those at the experimental site. These dosages were optimized to enhance soil fertility, structure, and microbial activity, creating favorable conditions for canola growth and maximizing the benefits of PGPR applications. The seeds were immersed in freshly prepared cultures of Azotobacter salinestris and Bacillus subtilis at a concentration of 10⁸ CFU/mL for 30 min to ensure thorough coating. This method of inoculation was employed to enhance the adhesion of PGPRs to the seed surface, facilitating early colonization of the rhizosphere post-germination. This approach aimed to optimize nutrient uptake, promote root development, and improve overall plant growth through the synergistic effects of the PGPRs.
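For clarity, the 3 × 7 factorial crossed with three blocks yields 21 treatment combinations per replication and 63 plots in total; a minimal sketch of how such a structure could be enumerated and randomised within blocks is shown below (plot assignment here is illustrative and is not the authors' actual field randomisation):

```python
# The 3 x 7 factorial crossed with three blocks gives 21 treatment combinations per
# replication and 63 plots in total. Plot assignment below is illustrative only.
import random
from itertools import product

pgpr_levels = ["PGPR0 (no PGPR)", "PGPR1 (Azotobacter salinestris)", "PGPR2 (Bacillus subtilis)"]
amendments = ["control", "FF", "HF", "BC", "CP", "PM", "AM"]

combinations = list(product(pgpr_levels, amendments))
assert len(combinations) == 21

layout = {}
for block in (1, 2, 3):              # three replications (blocks) of the RCBD
    plots = combinations.copy()
    random.shuffle(plots)            # randomise treatment order within each block
    layout[block] = plots

assert sum(len(plots) for plots in layout.values()) == 63
```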
Data collection The following parameters were measured and recorded in this experiment: plant height, number of primary branches, number of secondary branches, pod length, number of pods per plant, number of seeds per pod, biological yield, and grain yield, which together provide a clear picture of the growth and yield of the plants. Randomized sampling was used to measure the parameters of plant growth and productivity . Agronomic data were collected from each plot via random sampling of five plants within a 1 m × 1 m area; plant height was measured from the soil surface to the highest point of the plant with a meter rod. The average height was obtained for all the plots. Five plants from each plot were randomly chosen to record and measure primary and secondary branches, pods, and pod length, and the average value was determined for each parameter. To assess the biological yield, whole plants from a 1 m × 1 m quadrat in each plot were weighed, whereas to estimate the grain yield, plants from that same area were cut and threshed, and the grains were weighed . The average values of both parameters were then determined. This sampling design provided a representative sample of the plant growth and productivity parameters . These parameters were employed to measure the impacts of the various treatments on plant growth and yield. The measurements were replicated three times to reduce the effect of errors and increase the reliability of the results. Statistical analysis The quantitatively collected data were subjected to two-way analysis of variance (ANOVA) under a randomized complete block (RCB) design with a factorial arrangement. All the statistical tests were conducted via Statistix 8.1 software. A post hoc least significant difference (LSD) test was used to analyze differences in the treatment means at a probability level of 0.05 whenever the F test was significant.
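The analysis itself was run in Statistix 8.1; as a rough, hedged equivalent, a two-way ANOVA with blocks and an LSD threshold could be reproduced in Python as follows, assuming a plot-level data file with hypothetical column names:

```python
# Rough equivalent of the analysis (the authors used Statistix 8.1). The CSV file
# and its column names (block, pgpr, amendment, grain_yield) are hypothetical.
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("canola_plots.csv")   # one row per plot

# Two-way ANOVA for the factorial RCBD: block effect plus PGPR x amendment interaction.
model = ols("grain_yield ~ C(block) + C(pgpr) * C(amendment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Fisher's LSD at alpha = 0.05 for comparing two treatment-combination means,
# each estimated from r = 3 replicates.
r = 3
lsd = st.t.ppf(1 - 0.05 / 2, model.df_resid) * (2 * model.mse_resid / r) ** 0.5
print(f"LSD(0.05) = {lsd:.3f}")
```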
Canola plant height, the number of primary branches, the number of secondary branches, the number of pods per plant, the number of seeds per pod, biological yield, and grain yield were significantly influenced by PGPRs, fertilizer amendments, and their two-way interaction in year 1. However, pod length was not affected by the two-way interaction between PGPRs and fertilizer amendments in the year-1 experiment, as indicated in Table . When the experiment was repeated in year 2, plant height, the number of primary branches, the number of secondary branches, the number of pods per plant, pod length, biological yield, and grain yield were highly significantly influenced by the two-way interaction between the PGPRs and the fertilizer amendments, whereas the number of seeds per pod was significantly influenced by the interaction between PGPRs and fertilizers, as depicted in Table .
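Most of the percentage differences quoted in the following subsections appear to be expressed relative to the treatment mean rather than the control mean, that is, (treatment − control)/treatment × 100; a brief check against three of the reported values:

```python
# Most of the percentage gains quoted below follow (treatment - control) / treatment * 100,
# ie the difference expressed as a share of the treatment mean rather than the control mean.
def gain(treatment: float, control: float) -> float:
    return (treatment - control) / treatment * 100

print(round(gain(8.1, 3.0), 2))   # 62.96 -> primary branches, year 1
print(round(gain(17.0, 3.0), 2))  # 82.35 -> secondary branches, year 1
print(round(gain(304, 135), 2))   # 55.59 -> pods per plant, year 1
```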
Effects on plant height In year 1, the average plant height of canola was observed to be a maximum of 166 cm by the combined application of PGPR2 ( Bacillus subtilis ) with the fully recommended fertilizer, followed by 165 cm due to the application of biochar with PGPR2 ( Bacillus subtilis ). The lowest value of 132 cm was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The plant height was 23.5% and 23.03% greater than that in the control treatment because of the combined application of the fully recommended fertilizer (N: P:K@140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Similarly, in year 2, the average plant height of canola was observed to be a maximum of 169 cm by the combined application of PGPR2 ( Bacillus subtilis ) with the fully recommended fertilizer, followed by 167 cm due to the application of biochar with PGPR2 ( Bacillus subtilis ). The lowest value of 130 cm was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The plant height was 23.07% and 22.15% greater than that in the control treatment because of the combined application of the fully recommended fertilizer (N: P:K@140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Effects on primary branches In year 1, the average number of primary branches of canola was 8.1 when PGPR2 ( Bacillus subtilis ) was combined with the fully recommended fertilizer, followed by 7.5 when biochar combined with PGPR2 ( Bacillus subtilis ) was applied. The lowest value of 3 was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The primary branches were significantly 62.96% and 60% greater than those in the control treatment because of the combined application of the fully recommended fertilizer (N: P:K @140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Similarly, in year 2, the average number of primary branches of canola was 9.1 when PGPR2 ( Bacillus subtilis ) was combined with the fully recommended fertilizer, followed by 8.53 when biochar combined with PGPR2 ( Bacillus subtilis ) was applied. The lowest value of 4 was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The average number of primary branches was 56.04% and 53.10% greater than that in the control treatment because of the combined application of the fully recommended fertilizer (N: P:K@140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Effects on secondary branches In year 1, the average number of secondary branches of canola was observed to be a maximum of 17 by the combined application of PGPR2 ( Bacillus subtilis ) with the fully recommended fertilizer, followed by 15 due to the application of biochar with PGPR2 ( Bacillus subtilis ). The lowest value of 3 was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The secondary branches were significantly greater (82.35% and 80% higher, respectively) than those in the control treatment due to the combined application of the fully recommended fertilizer (N: P:K @140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Similarly, in year 2, the average number of secondary branches of canola was a maximum of 18 following the combined application of PGPR2 ( Bacillus subtilis ) with the fully recommended fertilizer, followed by 16 following the application of biochar with PGPR2 ( Bacillus subtilis ). The lowest value of 4 was observed in the control treatment, where no PGPR or fertilizer amendments were applied. The average number of secondary branches was 77.77% and 75% greater than that in the control treatment because of the combined application of the fully recommended fertilizer (N: P:K@140:55:60) kg/ha with PGPR2 ( Bacillus subtilis ), followed by the application of biochar with PGPR2 ( Bacillus subtilis ), as shown in Table . Effects on the number of pods In year 1, the average number of pods per plant of canola was observed to be a maximum of 304 by the combined application of PGPR2 ( Bacillus subtilis ) with the fully recommended fertilizer, followed by 285 due to the application of biochar with PGPR2 ( Bacillus subtilis ).
Effects on the number of pods In year 1, the average number of pods per plant was highest (304) with the combined application of PGPR2 (Bacillus subtilis) and the fully recommended fertilizer, followed by 285 with biochar plus PGPR2 (Bacillus subtilis), whereas the control, which received no PGPR or fertilizer amendment, produced only 135 pods per plant. These treatments significantly increased pod number by 55.59% and 52.63%, respectively, over the control (Table ). Similarly, in year 2, the average number of pods per plant reached a maximum of 310 with the fully recommended fertilizer (N:P:K @ 140:55:60 kg/ha) plus PGPR2 (Bacillus subtilis), followed by 290 with biochar plus PGPR2 (Bacillus subtilis), compared with 140 in the control, corresponding to increases of 54.83% and 51.72% (Table ). Effects on the number of seeds per pod In year 1, the average number of seeds per pod was highest (17.33) with PGPR2 (Bacillus subtilis) combined with the fully recommended fertilizer, followed by 16.66 with biochar plus PGPR2 (Bacillus subtilis), and lowest (6) in the control; the corresponding increases over the control were significant, at 65.37% and 63.98% (Table ). Similarly, in year 2, the average number of seeds per pod was 18.33 with the fully recommended fertilizer plus PGPR2 (Bacillus subtilis) and 17.67 with biochar plus PGPR2 (Bacillus subtilis), compared with 7 in the control, giving significant increases of 61.81% and 60.38% (Table ). Effects on pod length In year 1, the average pod length was greatest (9.16 cm) with PGPR2 (Bacillus subtilis) combined with the fully recommended fertilizer, followed by 8.5 cm with biochar plus PGPR2 (Bacillus subtilis), and shortest (3.66 cm) in the control; these treatments increased pod length by 60.04% and 56.94%, respectively, over the control (Table ).
Similarly, in year 2, the average pod length was greatest (9.5 cm) with PGPR2 (Bacillus subtilis) combined with the fully recommended fertilizer, followed by 9 cm with biochar plus PGPR2 (Bacillus subtilis), and shortest (4 cm) in the control, corresponding to increases of 57.89% and 55.55% over the control (Table ). Effects on biological yield In year 1, the average biological yield was highest (8.9 t/ha) with PGPR2 (Bacillus subtilis) combined with the fully recommended fertilizer, followed by 8.43 t/ha with biochar plus PGPR2 (Bacillus subtilis), and lowest (4.5 t/ha) in the control; the two best treatments significantly increased biological yield by 49.43% and 46.61%, respectively, over the control, as shown in Fig. . Similarly, in year 2, the average biological yield reached 8.97 t/ha with the fully recommended fertilizer (N:P:K @ 140:55:60 kg/ha) plus PGPR2 (Bacillus subtilis) and 8.6 t/ha with biochar plus PGPR2 (Bacillus subtilis), compared with 4.4 t/ha in the control, corresponding to increases of 50.94% and 48.83% (Fig. ). Effects on grain yield In year 1, the average grain yield was highest (4.6 t/ha) with PGPR2 (Bacillus subtilis) combined with the fully recommended fertilizer, followed by 4.4 t/ha with biochar plus PGPR2 (Bacillus subtilis), and lowest (2.46 t/ha) in the control; these treatments significantly increased grain yield by 46.52% and 44.09%, respectively, over the control (Fig. ). Similarly, in year 2, the average grain yield was 4.7 t/ha with the fully recommended fertilizer plus PGPR2 (Bacillus subtilis) and 4.5 t/ha with biochar plus PGPR2 (Bacillus subtilis), whereas the control yielded only 2.55 t/ha.
The grain yield was therefore 45.74% and 43.33% greater than that of the control because of the combined application of the fully recommended fertilizer (N:P:K @ 140:55:60 kg/ha) with PGPR2 (Bacillus subtilis), followed by the application of 2 t/ha biochar with PGPR2 (Bacillus subtilis), as shown in Fig. .
The use of compost, plant residue mulch, manures, and cover crops increases soil carbon content and improves the soil, and is therefore regarded as part of sustainable agriculture and soil conservation. These organic amendments help sequester atmospheric carbon in the soil, increase soil fertility, and support soil ecosystem services, thereby strengthening the soil and sustaining its productivity. The same applies to plant growth-promoting rhizobacteria (PGPR), which, in the context of sustainable agriculture, support crop productivity and optimize plant nutrition. PGPRs may act through nitrogen fixation, phosphorus solubilization, production of indole acetic acid, and other phytohormone-mediated processes. PGPR-based inoculants have thus proven to be promising biotechnological tools for increasing soil fertility and plant productivity, and they offer a powerful means of supporting food security and making agroecosystems more sustainable. PGPRs are valuable in legumes, cereals, vegetables, and other crops, particularly under changing climate regimes and sustainable agricultural practice. They are viewed as an influential tool for transforming modern agriculture because they allow eco-friendly crop-management strategies to be developed, and their adoption can help secure a healthy food chain for future generations. PGPRs also increase the ability of plants to combat diseases and reduce the frequency of irrigation required, and their application in agricultural practice can contribute to achieving sustainable development objectives. Bacillus subtilis produces and releases bioactive compounds, such as B vitamins, nicotinic acid, pantothenic acid, biotin, and heteroauxins, and promotes plant growth; gibberellin is needed to stimulate the formation of the root system. Additionally, Bacillus subtilis solubilizes inorganic and organic phosphorus, a characteristic of efficient free-living nitrogen-fixing bacteria.
Notably, several studies suggest that Bacillus species can facilitate plant growth through auxin production, independent of nitrogen fixation. In this study, the combined application of the fully recommended fertilizer (N:P:K @ 140:55:60 kg/ha) with PGPR2 (Bacillus subtilis) increased the plant height, number of primary and secondary branches, number of pods, number of seeds per pod, biological yield, and grain yield of canola. This is because Bacillus subtilis enhances the bioavailability of nutrients, reduces the need for synthetic fertilizers, improves soil structure, increases water retention, promotes beneficial microbial communities, and produces auxin (indole-3-acetic acid, IAA), a phytohormone essential for plant growth and development. Auxin promotes root elongation, increases root hair and lateral root formation, and enhances nutrient uptake; it also plays a central role in cell division and elongation, fruit development, and senescence, and it initiates root, leaf, and flower development. Similarly, Iqbal et al. noted that co-inoculation of plant growth-promoting rhizobacteria with fertilizers increases the growth and yield of canola. Bacillus subtilis also inhibits plant pathogens and pests, thereby limiting the need for chemical control measures. The present work supports Martínez et al., who concluded that Bacillus subtilis improves the physical properties of the soil, increases water availability, and stimulates the growth of beneficial microorganisms. Research has likewise shown that PGPR inoculation increases crop yields, plant health, and soil fertility in the areas where it is used, and suppresses plant pathogens and pests, reducing reliance on chemical control. Plant height, the numbers of primary and secondary branches, the number of seeds per pod, pod length, and the biological and grain yields of canola also reached their second-highest values when biochar was applied at 2 t/ha together with PGPR2 (Bacillus subtilis), after the treatment combining the fully recommended fertilizer with Bacillus subtilis. This reflects the synergistic effects of PGPR with biochar, because biochar application enhances oxidation–reduction reactions in the soil matrix and thereby improves soil fertility. Biochar has received much interest as a soil amendment because it is inexpensive to produce and, in addition to serving as a carbon source, improves the chemical properties of the soil. Its application has been well documented to alter the physical and chemical characteristics of the soil and thus improve plant growth and yield. These benefits arise mainly because biochar enhances water and nutrient retention, improves the cohesiveness and porosity of the soil, promotes nutrient uptake by plants, and stimulates soil microbial activity. The response nevertheless depends on the specific soil properties and conditions: effects are generally positive, but there are exceptions, and in certain situations they can be neutral or even slightly negative. These findings are supported by Nagah et al., who noted that the stimulated microorganisms also help plants withstand abiotic stresses such as drought and salinity and reduce the need for chemical fertilizers and pesticides.
Bacillus species are also effective at increasing humus and soil carbon stocks, decreasing greenhouse gas emissions, reducing soil erosion, and improving the quality of supplied water. Artyszak and Gozdowski likewise reported that the application of Bacillus subtilis positively influences plant growth not only through nitrogen fixation but also by promoting root growth, increasing mineral absorption, and inhibiting pathogenic fungi and bacteria. Biochar amendment improves the stability of the structural components of the soil, such as aggregates, solids, and organic matter, which favors plant growth. The range of particle sizes in biochar also helps increase the water-holding capacity and aeration of healthy soil. In addition, biochar can remedy poor structure by increasing porosity and aeration, especially in compacted soils. Notably, although fine sandy soil has a greater surface area and porosity than biochar, biochar remains a good soil amendment. Biochar mixed with composted biomass is also beneficial: the rapidly decomposing biomass provides plants with a steady nitrogen supply until nutrients are slowly released from the biochar. Moreover, biochar has a long residence time in the environment and soil and is therefore a long-term soil amendment. Lalay et al. reported that, in dry agro-environmental settings, biochar (BC) and plant growth-promoting microorganisms (PGPR) may be useful agronomic tools for mitigating the effects of drought, and Tian et al. reported that biochar can significantly increase upland crop output and nitrogen use efficiency (NUE) across a variety of soil types. Unlike synthetic fertilizers, organic fertilizers are safer and better for the environment; they are non-toxic and therefore well suited to sustainable farming. Within the bioeconomy, plant growth-promoting rhizobacteria have numerous positive effects on agriculture, which is crucial for many commercially valuable monoculture crops because soil amendments are required for enhanced germination, yield, and disease tolerance. This two-year study assessed the use of PGPRs together with organic and inorganic fertilizers to increase canola productivity and production. Two PGPR strains, Azotobacter salinestris and Bacillus subtilis, were combined with biochar, compost, animal manure, poultry manure, and NPK fertilizer. The full recommended dose of N:P:K (140:55:40 kg/ha) combined with Bacillus subtilis enhanced canola production and other agronomic characteristics; compared with the control and the other treatments, this combination improved most aspects of plant growth and nutrient status and therefore gave the highest yields. The present investigation further revealed that the interaction of B. subtilis with biochar at 2 t/ha significantly enhanced canola yield and quality traits. Biochar benefits the quantity and quality of soil microbes, soil structure, and nutrient regulation, which promotes crop-producing capacity and crop quality. Together, these strategies provide durable methods for increasing canola productivity and production, improving the quality of the crop and, consequently, the yields and returns of canola producers.
On the basis of these findings, we recommend treating canola seeds with Bacillus subtilis in combination with either the full recommended dose of N:P:K (140:55:40 kg/ha) or biochar at a rate of 2 t/ha. These strategies offer positive, long-term approaches to increasing canola production, and canola farmers may benefit from them because they improve crop performance and, consequently, the yield and quality of the produce. In conclusion, deploying PGPRs with the right combination of organic and inorganic amendments may improve canola production, and this study supports the premise that such combinations can make farming practices more sustainable for both growers and the environment. The recommendation to combine chemical fertilizers with PGPRs and organic amendments can align with sustainable agriculture if managed carefully: using chemical fertilizers at recommended levels alongside organic amendments reduces dependency on chemicals, supports soil health, and enhances nutrient efficiency, ensuring high productivity while minimizing the negative impact on soil microorganisms and reducing environmental harm, thus supporting long-term sustainability. However, excessive reliance on chemical fertilizers may still harm soil biology, so balanced use is crucial. |
Quality of orthodontic care in an academic setting in the Middle East | 7b1b5401-1407-4ccc-ac7e-5df9192d2a83 | 11772880 | Dentistry[mh] | Evaluating the effectiveness of orthodontic treatment and providing high-quality care to patients requires a thorough assessment of treatment needs, difficulties, and outcomes. Occlusal indices hold a significant importance across various aspects of orthodontics, including resource allocation, and the establishment of treatment standards . Indices of treatment outcome also serve as a means of evaluating the quality of orthodontic care provided, thereby facilitating enhancements in education, research, and audit , . Information from these indices also offers valuable perceptions of the profiles of practitioners and healthcare systems prevailing within a nation . Consequently, validated and reliable indices, such as the Peer Assessment Rating (PAR), American Board of Orthodontics Objective Grading System (ABO-OGS), and Index of Complexity Outcome and Need (ICON), have been widely used worldwide to assess standards of orthodontic care – . These indices have become integral in assessing the standards of orthodontic care in both research and clinical practice. The PAR index is a widely utilized measure to assess orthodontic treatment outcomes by evaluating dental occlusion. It incorporates weighted scores for several occlusal components including alignment, buccal occlusion, overjet, overbite, and midline discrepancy. PAR enables an objective assessment of the severity of malocclusion and the improvement following treatment. The ABO-OGS is an assessment tool developed by the American Board of Orthodontics to objectively evaluate treatment quality by focusing on specific criteria, such as alignment, marginal ridges, buccolingual inclination, and overjet. The ICON is a comprehensive index designed to assess the complexity of malocclusion, predict treatment need, and evaluate outcomes. It combines five weighted components (esthetic assessment, upper arch crowding or spacing, crossbite, overbite/open bite, and buccal occlusion) to provide a single score and helps in determining treatment priority and assessing effectiveness. The effectiveness of orthodontic treatment has been examined in a range of settings, including state-funded hospitals , – , private practices , – , educational programs , , and among practitioners with varying levels of expertise such as specialists and residents . In graduate orthodontic programs, residents often provide a high standard of clinical care , with reports showing that approximately half of the resident-treated cases meet the requirements for board certification . The orthodontic outcomes for resident-treated cases are reported to be similar to those of board-certified practitioners . In addition, the orthodontic outcomes for patients treated by residents and specialists in the same setting were not found to be significantly different . However, when the orthodontic treatment outcomes of university orthodontic clinics were compared to private practices, some studies showed no differences , , whereas others reported suboptimal occlusal results for cases treated in a university setting . Although individual variations exist, the average duration of treatment with fixed orthodontic appliances lasts for approximately 2 years . Malocclusion characteristics, patient-related, and operator-related factors can contribute to longer orthodontic treatment duration , . 
An increase in orthodontic treatment duration is associated with problems such as greater cost , and a prolonged need for lifestyle modifications . Additionally, the likelihood of root resorption , , gingival recession , and enamel decalcification also increases with longer orthodontic treatment duration. It is unclear whether treatment duration varies based on the treatment setting, as some reports suggest a longer treatment duration in private practices , while others indicate significant increases in treatment duration in educational settings . Considering duration is also important when assessing the quality of orthodontic care provided, as it helps to evaluate the overall success of the treatment. Studies evaluating the quality of orthodontic care provided in academic settings in the UAE are sparse . Therefore, this study aims to fill this gap by evaluating the quality of treatment provided by residents in an educational institute in the UAE using multiple indices, while simultaneously investigating possible correlations between treatment duration and patient- and operator-related variables. Ethical considerations This study was approved by the Mohammed Bin Rashid University—Institutional Review Board (MBRU IRB-2022-163). All research was performed in accordance with relevant guidelines/regulations of the institute. General consent to use the data for scientific purposes was obtained from the patients and/or their legal guardians at the time of registration. Due to the retrospective nature of the study, the Mohammed Bin Rashid University—Institutional Review Board waived the need of obtaining informed consent. Sample size Calculations using software (G*Power, Ver. 3.1.9.7) to determine the study sample size indicated that a sample size of 200 subjects was needed to represent the population for a power of 80% and a significance level of 0.05. Inclusion and exclusion criteria Patients who completed comprehensive orthodontic treatment with fixed appliances, with high-quality pre- and post-treatment records (photographs, dental casts with no chipping or breakage of any teeth and dental panoramic radiographs with no technical or patient-related artifacts), and a minimum retention time of six months were included in the study. Patients who received limited or adjunctive orthodontic treatment, with craniofacial syndromes, or with incomplete records were excluded. Resident and faculty profile The three year orthodontic residency program has a yearly intake of six residents (total = 18). The clinical faculty includes both full-time (n = 4) and part-time (n = 2) members, with a cumulative experience of over fifty years. Dataset The pre- and post-treatment orthodontic records of patients who underwent comprehensive orthodontic treatment with fixed appliances (22 slot, Roth prescription, stainless steel, American Orthodontics Mini Master ® brackets; Sheboygan, USA) between 2015 and 2022 were included in this study. Information regarding sex, age at the start of treatment, category of malocclusion, whether extractions were a part of treatment, duration of active treatment, number of missed appointments as well as number of residents treating were obtained from the electronic dental records of the hospital. All patient information was de-identified before being made available to the investigators. Measurement indices and calibration The PAR, ABO-OGS and ICON were the three indices used in this study. 
Prior to commencing the study, the two investigators involved in the measurements (KG and AT) participated in multiple calibration sessions. Randomly selected casts (n = 25) were scored independently by the two investigators to assess inter-observer reliability, and the measurements were repeated after a two-week interval on the same set of casts to assess intra-observer reliability. Data analysis Data were analyzed in SPSS version 28 (IBM, SPSS Inc., Chicago, IL, USA). Descriptive data were collected and summarized from orthodontic records. The Shapiro–Wilk test was used to assess data normality. Spearman's rank correlation was used to measure the strength and direction of association between the patient- and treatment-related variables. Mann–Whitney and Kruskal–Wallis tests were used for comparisons involving categorical variables. Statistical significance was set at p < 0.05 for all analyses. To create a visual representation of the distribution of patients by nationality, a spreadsheet application (Microsoft Excel 2022, https://www.microsoft.com/en-us/microsoft-365/excel) was utilized.
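The analyses described in the data-analysis paragraph above were run in SPSS; as an illustration only, the sketch below approximates the same tests with Python's scipy. The data frame, file name, and column names are assumptions for the example and are not the study's actual variables.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-patient table; the file and column names are assumptions.
df = pd.read_csv("orthodontic_records.csv")

# Normality check for a continuous variable such as treatment duration.
print(stats.shapiro(df["treatment_days"]))

# Spearman rank correlation between missed appointments and duration.
rho, p = stats.spearmanr(df["missed_appointments"], df["treatment_days"])
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

# Duration compared between two groups (e.g., extraction vs non-extraction).
ext = df.loc[df["extraction"] == 1, "treatment_days"]
non = df.loc[df["extraction"] == 0, "treatment_days"]
print(stats.mannwhitneyu(ext, non, alternative="two-sided"))

# Duration compared across more than two groups (e.g., malocclusion class).
groups = [g["treatment_days"].values for _, g in df.groupby("malocclusion_class")]
print(stats.kruskal(*groups))
```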
Normality tests revealed that the data were normally distributed. The patients included in the study (n = 201) came from over forty different nationalities (Fig. ), and the majority were female (Table ). Most patients seeking treatment presented with Class II malocclusion, followed by Class I and Class III malocclusions (Table ). The mean age at the start of orthodontic treatment was 19.8 ± 10.5 years (range 12–53 years). Orthodontic treatment was performed on a non-extraction basis in most patients (Table ). Most patients received orthodontic care from a single resident, and a small minority received care from more than two residents. The average duration of orthodontic treatment was 816 ± 376.4 days, with a mean of 28.8 ± 12.6 visits to the orthodontic department on a 3–4-weekly recall schedule. The average number of missed appointments was 3.3 ± 4.9 visits. The inter- and intra-rater correlations for all indices were high (r ≥ 0.981), indicating excellent reliability. Assessment of treatment outcomes revealed significant improvements across all three measurement indices used in this study. The mean reduction in PAR score was 84.5%. The PAR improvement scale and the PAR scores plotted on a nomogram are presented in Table and Fig. , respectively, for the patients included in this study. The mean ABO-OGS score was 14.8, with the majority of cases classified as satisfactory (Table ). Similarly, the mean ICON score fell by 30.3 points, from 42.1 points before treatment to 11.8 points after treatment, and an acceptable orthodontic treatment outcome was achieved in all patients. Case complexity, as assessed by the ICON, showed that a sizable proportion of patients seeking treatment were in the mild to moderate categories (Table ). Treatment need, as assessed by the ICON, indicated that 45.8% of the subjects required treatment. The correlations between orthodontic treatment duration and the patient- and operator-related factors investigated in this study are shown in Table . No correlation was observed between malocclusion category (p = 0.121) or treatment modality (extraction/non-extraction) (p = 0.163) and treatment duration. However, a moderate positive correlation (r = 0.572, p < 0.001) was observed between the number of treating residents and treatment duration. Age at the start of treatment showed a mild positive correlation (r = 0.165, p = 0.018) with treatment duration.
Similarly, the number of missed appointments was also positively correlated (r = 0.671, p < 0.001) with treatment duration. This study reports original data on the outcomes of orthodontic treatment provided by residents within an educational setting in the UAE. The orthodontic outcomes of patients treated by residents within an educational setting have been documented previously, but from outside the geographical region of this study. Previous studies in the same geographical region investigated orthodontic treatment needs in a large sample of adolescents and examined orthodontic treatment outcomes. Although the previous assessment of treatment outcomes was carried out in the same setting as this study, its sample size was only one-tenth that of the present study. In addition to the large sample size, the strengths of the current study include the generalizability of the sample and the use of multiple indices. Although this was a single-center study, the results are representative of diverse patient demographics (Fig. ) owing to the multiethnic nature of the UAE population. At present, there is no universal assessment tool for evaluating orthodontic treatment outcomes. In this study, three indices were utilized, each complementing the others. The PAR index evaluates improvements in occlusion, establishes treatment standards, and offers a detailed view of a single component, but falls short in assessing final outcomes. The ABO-OGS index, on the other hand, quantifies finishing quality but fails to account for treatment complexity and need. Lastly, the ICON assesses pre-treatment need and complexity more rigorously and incorporates an aesthetic standard in addition to the occlusal component. Scores from these indices have also been compared in the past. The simultaneous use of three occlusal indices in this study therefore enabled a comprehensive evaluation of the treatment results. Additionally, measurements were made on orthodontic study casts rather than derived from lateral cephalograms, which are prone to measurement errors. Advanced digital methods, including intraoral scanning and automated model analyses, may also be incorporated in future studies. The study limitations should also be considered. First, as this was a single-center study, the orthodontic treatment protocols followed were likely homogeneous, limiting the diversity of modalities available for patient treatment. The orthodontist's choice of techniques and appliances can have varying effects on tooth movement, with implications for treatment duration. Furthermore, the study focused on treatment assessment indices that are specifically of concern to clinicians, without assessing patients' perceptions, satisfaction, and quality of life. Also, only patients with complete records were included in the study, introducing a possible selection bias. Lastly, both assessors who evaluated the treatment outcomes were from the same academic setting, which may have introduced institutional biases that could potentially affect the outcome measures; to mitigate this, standardized measurement indices and a rigorous calibration process were used to ensure consistency in data collection. The PAR index was used to assess the pre- and post-treatment occlusal status of the patients in this study.
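Before comparing the PAR findings with other reports, it may help to recall how PAR change is typically categorized. The sketch below applies commonly cited thresholds (a reduction of at least 22 points for "greatly improved", a reduction of at least 30% for "improved", and less than 30% for "worse/no different"); both the thresholds and the per-patient values shown are assumptions for illustration and should be checked against the scoring convention actually used in the study.

```python
import pandas as pd

# Hypothetical per-patient PAR scores; column names and values are assumptions.
df = pd.DataFrame({"par_pre": [28, 19, 10], "par_post": [3, 2, 9]})

df["reduction_pts"] = df["par_pre"] - df["par_post"]
df["reduction_pct"] = 100 * df["reduction_pts"] / df["par_pre"]

def par_category(row):
    # Commonly cited PAR improvement thresholds (treated here as assumptions).
    if row["reduction_pts"] >= 22:
        return "greatly improved"
    if row["reduction_pct"] >= 30:
        return "improved"
    return "worse/no different"

df["category"] = df.apply(par_category, axis=1)
print(df)
print(df["category"].value_counts(normalize=True) * 100)  # percentage per category
```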
The mean initial PAR score in this study (19.3 ± 12.4) was similar to values reported from postgraduate training centers by Elshafee et al. and Firestone et al. On the other hand, the final PAR score (2.1 ± 4.7) was lower than that reported from postgraduate training centers in Europe, but comparable to centers in the United States. The percentage reduction in PAR score (84.59%) in this study was similar to that reported from a postgraduate training center in the United States. Reported percentage reductions in PAR score range from 63 to 78% across diverse settings, including hospital orthodontic services, public group practice, and postgraduate training centers. Only 5% of the cases (Fig. ) were in the "worse/no different" category, suggesting a high standard of care provided by the residents. It should also be noted that the majority (61.2%) of orthodontic cases had an initial PAR score of less than 22 points. When the pre-treatment PAR score was higher, a trend towards longer treatment was also observed, consistent with the findings of Birkeland et al. The ABO-OGS was also used to evaluate treatment outcomes and assess the quality of orthodontic treatment. Only 27 patients (13.4%) had unsatisfactory results, again indicative of a relatively high standard of care provided by the residents during their postgraduate training. The ABO-OGS score in this study (14.8 ± 9.8) was within the cutoff values reported in a multicenter study from China. In this study, no significant correlation between treatment duration and the ABO-OGS score was observed. Interestingly, the ABO Discrepancy Index (DI) has been reported to be useful for predicting orthodontic treatment time. In addition to the ABO-OGS, the ABO-DI is part of the ABO certification process; however, its focus is to quantify the starting difficulty of a case and recognize a clinician's ability to treat cases of varying complexity. The ICON was used to assess the complexity of and the need for orthodontic treatment in this study. The international cutoff for orthodontic treatment need using the ICON is set at 43, although differing, population-specific cutoffs (such as 52) have been proposed. Of the subjects seeking orthodontic treatment, 45.8% required treatment based on the pre-treatment ICON score (> 43 points). Although the physical attributes scored by the ICON are common across populations, subjectivity in the perception of the esthetic component could explain the variation observed. The mean ICON score obtained in this study (42.1 ± 21.3) was lower than the values reported in a Norwegian study by King et al. (54.9) and a UK-based study by Koochek et al. (69.0). It is also important to note that there is no waitlist for patients seeking orthodontic care in the UAE, in contrast to other countries where orthodontic triage is common practice as a means of improving the appropriateness of referrals. The ICON results showed that all the patients in this study had acceptable outcomes. This might be because most cases had a low pre-treatment ICON score, in contrast to studies reporting unacceptable results in 12–29% of cases. The post-treatment ICON score (11.8 ± 4.5) was slightly lower than the value (15.8) reported by a general practice-based study in the UK. Interestingly, problems associated with treating complex malocclusions in an educational setting have been highlighted in the past.
There was a significant difference between the ICON case complexity categories in terms of orthodontic treatment duration, indicating that complex cases took longer to treat. Overall, orthodontic treatment in this study achieved a good outcome, with a significant proportion of patients showing great or substantial improvement. Without doubt, the outcome of orthodontic treatment is also influenced by patient compliance, including attendance at follow-up appointments. In this study, a higher number of missed appointments was associated with longer treatment duration (Table ), concurring with the findings reported by Kiyamehr et al. However, it must also be borne in mind that the study duration coincided with the social restrictions imposed by the pandemic. A certain percentage of missed appointments in this study may have been due to the institution cancelling patient appointments to abide by local health authority regulations to prevent overcrowding in the clinics. The literature on the effect of the recent pandemic on orthodontic treatment duration is inconclusive, as both increased durations and no differences have been reported. However, the reduced and irregular appointments during the pandemic did not have any effect on PAR score improvement, which is similar to what we observed. No discernible impact on the length of orthodontic treatment was seen in this study on the basis of whether treatment was performed with or without extractions (Table ), similar to the findings of Pariskou et al., but contrary to those of Holman et al. Patient- and operator-related factors that contribute to longer treatment durations have been identified. Factors such as sex and malocclusion category did not have any relationship with orthodontic treatment duration in this study. However, treatment duration was observed to be longer in complex malocclusions, in younger subjects, when patients missed appointments, and when multiple residents were involved in the treatment. Given the three-year duration of orthodontic resident training, patients are frequently transferred to junior residents during treatment, which was also the case in this study. Residents tend to focus on patients who are progressing well, finishing those cases by the time they graduate, whereas patients who are not progressing as expected are often placed on the transfer list for the subsequent cohort of residents. Diminished clinical outcomes are also seen in graduate orthodontic programs, particularly when the active treatment time is long. This study found that when multiple residents were involved in providing care, treatment duration was longer. This is consistent with multiple retrospective studies that found significant increases in treatment duration resulting from changes in the operator across settings, including a teaching environment, a state-funded hospital, and even private practice. Interestingly, an increase in treatment duration in an educational setting did not have a relationship with clinical outcomes. The findings of this study indicate favorable outcomes regarding the quality of orthodontic care delivered by residents in an academic environment; however, fresh benchmarks for the duration of orthodontic treatment are needed to further enhance clinical care. |
Plasma NOTCH3 and the risk of cardiovascular recurrence in patients with ischemic stroke | 84929dfb-6649-4779-a9d6-eab0ce9ff82b | 11760494 | Biochemistry[mh] | Ischemic stroke patients are confronted with the alarming possibility of experiencing another cardiovascular event. Epidemiological data indicates that approximately one in four patients may encounter a recurrent cardiovascular event within the first 5 years after a stroke or transient ischemic attack (TIA), even when diligently adhering to existing preventive treatment recommendations. Previous studies have explored the specific biological reasons for their susceptibility to cardiovascular recurrence. In the Genotype Recurrence Risk of Stroke (GRECO) study, genetic variability in the MGP gene was associated with vascular recurrence in a Spanish population. Additionally, genetic variants on chromosomes 12p13 and 9p21.3 showed strong correlations with stroke survival and recurrence in a Chinese population. Epigenetic studies have implicated variability in TRAF3 regulation as a potential contributor to vascular recurrence. Moreover, increased levels of lipoprotein (a), lipoprotein-associated phospholipase A 2 activity and copeptin have been shown to elevate the likelihood of cardiovascular recurrence following a stroke or TIA. Metabolomic studies have associated medium-chain acylcarnitines and lysophosphatidylcholine (LysoPC[16:0]) with the risk of stroke recurrence. In preclinical studies, the release of brain-derived alarmins and stimulation of the systemic immune response were found to promote endothelial inflammation and atheroprogression after stroke. These findings provide support for the existence of a biological basis underlying cardiovascular recurrence in ischemic stroke patients. Previous investigations, although informative, have not utilized proteomics to elucidate the underlying biological mechanisms of cardiovascular recurrence in ischemic stroke patients. We hypothesized that ischemic stroke patients, particularly those susceptible to recurrent cardiovascular events, may possess distinct protein signatures that could predispose them to such recurrences. To overcome the potential masking effect of high-abundant proteins in biological samples, which could hinder the detection of disease-relevant biomarkers present at lower concentrations, we conducted quantitative proteomics on plasma-derived microvesicles from prospectively recruited ischemic stroke patients. Through pathway analysis, we identified candidate biomarkers, which were subsequently validated independently in a separate stroke cohort. These biomarkers were then subjected to rigorous evaluation of their biological significance concerning stroke pathogenesis and cardiovascular recurrence, utilizing animal stroke and atherosclerosis models. This comprehensive approach aimed to shed light on the molecular mechanisms underpinning the propensity for cardiovascular recurrence in ischemic stroke patients. Clinical Cohort From January to December 2016, we conducted a prospective recruitment of consecutive patients diagnosed with acute ischemic stroke at the National University Hospital, Singapore. The diagnosis was established based on corroborative history-taking, neurological examination, and neuroimaging. Comprehensive information on demographics, risk factors, stroke severity, and mechanisms was collected. Stroke severity was assessed using the National Institutes of Health Stroke Scale (NIHSS). 
To investigate stroke mechanisms, angiography (CT or MRI), echocardiogram and 24-h electrocardiogram investigations were performed. Based on the results of these investigations, patients were categorized into different stroke subtypes using the Trial of Org 10172 in Acute Stroke Treatment (TOAST) criteria, including large artery disease, cardioembolism, small artery disease, undetermined causes and other causes. Exclusion criteria included patients below 21 years old, pregnant individuals, those with intracranial hemorrhage, active cancer and autoimmune diseases. The primary endpoint of the study was the occurrence of a composite outcome comprising cerebrovascular events (non-fatal stroke and transient ischemic attack [TIA]) and coronary artery events (non-fatal acute myocardial infarction, unstable angina and fatal myocardial infarction). Follow-up information on outcomes was collected through direct or telephone interviews with patients, family members, or caregivers, until the occurrence of an event or until July 2019, whichever came earlier. A blinded committee of investigators adjudicated the study outcomes against medical records. Verification of all causes of death was performed using the Registry of Birth and Death, Ministry of Home Affairs, Singapore. As a comparison group, we recruited age- and sex-matched healthy individuals without vascular diseases. Blood samples were collected from the participants using heparin tubes through venepuncture. The blood was then immediately processed to obtain plasma and peripheral blood mononuclear cells (PBMC) (see for additional details). Microvesicles were isolated from the plasma samples by first de-fibrinating them using thrombin, creating a fibrin-free serum-like fraction without clotting factors. Subsequently, the fibrin-free samples were enriched for microvesicles through sequential ultracentrifugation. All the collected samples were stored at the Tissue Repository Unit of our hospital. To gain insights into the potential mechanisms of the identified candidate protein, we measured plasma levels of NT-brain natriuretic peptide (NT-proBNP), interleukin-6, S100β, cortisol and insulin. These proteins target various aspects of stroke pathogenesis and were analyzed using the Roche e-411 analyzer (Roche Diagnostics, Switzerland). The study was approved by the Domain-Specific Review Board, National Healthcare Group and was carried out in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice. All participants provided written informed consent after receiving detailed explanations by trained individuals. Quantitative Proteomic Analysis Ischemic stroke patients were separated into two cohorts, namely the ‘Derivation’ and ‘Validation’ cohorts ( and ). In the derivation phase, microvesicles were isolated from the plasma of 48 stroke patients, with 24 patients experiencing cardiovascular recurrence (Event+) and 24 patients without recurrence (Event−) . The sample size of 24 subjects per outcome group was predetermined to achieve a significance level of 10 −5 with 80% statistical power, considering a Cohen effect size of 1.8. Representative electron microscopy images of microvesicles isolated from plasma are shown in . For proteomic analysis, four biologically independent replicates were selected for each Event+ and Event− group, with each replicate pooled from microvesicles of six patients. The pooled microvesicles from equal volumes of plasma underwent reduction, alkylation and digestion using Lys-C and trypsin. 
A tandem mass tag (TMT)-based quantitative proteomics method was employed to analyze microvesicles and compare protein expressions between the Event+ and Event− groups. Candidate protein targets were identified by examining their potential involvement in cerebral and cardiac vasculature using the Protein ANalysis THrough Evolutionary Relationships (PANTHER) Classification System and Genotype-Tissue Expression (GTEx) portal. The levels of the identified protein targets were measured in the plasma of Event+ and Event− patients to validate the data derived from proteomic analysis and calculate the required sample size for the Validation Cohort. To elucidate the mechanisms through which NOTCH3 mitigates vascular damage leading to cardiovascular recurrence, animal stroke and atherosclerosis studies were conducted. Stroke Mouse Model and Western Blot Analysis A mouse model of middle cerebral artery occlusion (MCAO) was employed to recapitulate transient focal cerebral ischemia and reperfusion. Three-month-old male C57BL/6NTac mice housed in the animal facilities at the National University of Singapore, Singapore, were randomly selected for inducing experimental ischemic stroke. Mice were included in the study if they successfully underwent MCAO, which was defined by an 80% or greater drop in cerebral blood flow and subsequent recovery of cortical blood flow to its basal level after reperfusion, as confirmed using laser Doppler flowmetry. At each experimental time-point, whole blood was collected from the inferior vena cava of each mouse under isoflurane anesthesia. The blood samples were allowed to clot at room temperature for 30 min and then centrifuged at 2000 g for 10 min to separate sera, which were stored at −80°C until further western blot analysis. Atherosclerosis Mouse Model and Immunohistochemical Staining Wild-type (WT) and Apoe−/− mice on a C57BL/6 background were obtained from the Jackson Laboratory (Bar Harbor, ME) and were housed in the animal facilities at the National University of Singapore, Singapore. To induce the development of atherosclerosis, the mice were fed a high-fat diet (21% fat and 0.15% cholesterol) starting from 6 weeks of age and continued for 16–18 weeks. The Apoe−/− mice exhibited hypercholesterolemia due to impaired lipoprotein clearance, resulting in the formation of atherosclerotic plaques. The absence of Apoe expression also led to systemic inflammation and degradation of extracellular matrix, exacerbating the severity of atherosclerotic lesions in vivo. For immunohistochemical staining, innominate arteries were fixed in 2% paraformaldehyde with 30% sucrose at 4°C, embedded in Tissue-Tek Optimum Cutting Temperature (OCT) compound, and sectioned at 10 µm intervals using a cryotome at the Advanced Molecular Pathology Laboratory, IMCB, Singapore. All experimental procedures involving animals were approved by the National University of Singapore Institutional Animal Care and Use Committee (IACUC). Isolation of Circulating Endothelial Cells for Flow Cytometry, Cell Sorting and Quantitative PCR To isolate circulating endothelial cells (CECs), independent biological replicates were created by pooling peripheral blood mononuclear cell (PBMC) samples from stroke patients (n = 7 per biological replicate) and non-stroke controls (n = 7 per biological replicate). Rigorous criteria were applied to select the relevant CEC population from plasma samples of stroke subjects.
Anucleated cells such as platelets were excluded using Hoechst 33342 nucleic acid stain, and cells of hematopoietic lineages were excluded using the CD45 marker. Additionally, bone marrow-derived endothelial progenitor cells were excluded using CD133, in conjunction with platelet and endothelial cell adhesion molecule 1 (PECAM1), resulting in the isolation of CECs with a marker profile of DNA+/CD45−/CD133−/PECAM1+. Total RNA was extracted from the fluorescence-activated cell sorting (FACS)-sorted cells and cultured cells using the RNeasy Plus Micro kit (Qiagen). Subsequently, cDNA was synthesized following the manufacturers' instructions using either the Sgenix TeraScript cDNA synthesis kit (Sgenix) or LunaScript RT SuperMix kit (New England BioLabs). Gene expression analyses were carried out with gene-specific primers, and GAPDH was used as the endogenous control. The qPCR reactions were performed using the Sgenix Tera-Cybr qPCR kit (Sgenix) or Luna Universal qPCR Master Mix (New England BioLabs) and run on the QuantStudio 6 Flex system (Applied Biosystems). Statistical Analysis Categorical variables were presented as numbers (n) and percentages, while continuous variables were reported as mean (standard deviation) or median (interquartile range), depending on their distribution. To compare continuous variables, we used the Student t-test or Wilcoxon rank-sum test as appropriate. Categorical variables were compared using the χ² test, and we applied the Bonferroni method to correct for multiple comparisons. Stepwise multivariable regression methods were employed to identify clinical and blood biomarker predictors of plasma NOTCH3. Cumulative event-free rates were calculated using the Kaplan–Meier method, and differences in events based on plasma NOTCH3 were tested using the log-rank method. Cox proportional hazard modeling was used to compute and adjust the hazard ratio (95% confidence interval, CI) between NOTCH3 quartiles and recurrent vascular events. Sample size calculation was performed using the pwr package (version 1.1-2) in R. To examine the association between protein targets and clinical outcomes, logistic regression methods were used to derive odds ratios and 95% confidence intervals (CI). In animal models, differences between experimental and control groups were determined using the 2-tailed Student's t-test or non-parametric Mann–Whitney test. SPSS Statistics version 27 from IBM Corporation was utilized for all analyses, and GraphPad Prism software was used to create graphs. Statistical significance was considered when P < 0.05.
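For illustration, the survival and sample-size analyses described in this section can be sketched in Python. This is not the authors' code (the study used the pwr package in R together with SPSS and GraphPad Prism); the column names, covariates and DataFrame layout below are assumptions for demonstration, and the effect-size calculation may not reproduce the exact figures reported in the paper because the precise test and corrections behind them are not stated.

```python
# Minimal sketch of the statistical workflow described above (not the authors' scripts).
# Assumes a pandas DataFrame `df` with hypothetical columns:
#   time_months - follow-up time to event or censoring (months)
#   event       - 1 = recurrent cardiovascular event, 0 = censored
#   notch3_q    - plasma NOTCH3 quartile (1-4)
#   age, nihss  - example covariates for adjustment
import pandas as pd
from statsmodels.stats.power import TTestIndPower
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Sample-size estimation for a two-group comparison (analogous to the pwr call in R).
# The paper reports 24 subjects per group for the derivation cohort; the exact
# test/correction behind that figure is not specified, so this may differ slightly.
n_per_group = TTestIndPower().solve_power(effect_size=1.8, alpha=1e-5, power=0.80,
                                          alternative="two-sided")
print(f"required n per group (illustrative) ~ {n_per_group:.1f}")

def survival_analysis(df: pd.DataFrame) -> None:
    # Kaplan-Meier event-free curves per NOTCH3 quartile
    kmf = KaplanMeierFitter()
    for q, sub in df.groupby("notch3_q"):
        kmf.fit(sub["time_months"], event_observed=sub["event"], label=f"Q{q}")
        print(f"Q{q}: event-free probability at 36 months = {kmf.predict(36):.2f}")

    # Log-rank test for differences in events across quartiles
    lr = multivariate_logrank_test(df["time_months"], df["notch3_q"], df["event"])
    print("log-rank p =", lr.p_value)

    # Cox proportional hazards model; notch3_q treated as a numeric covariate here
    cph = CoxPHFitter()
    cph.fit(df[["time_months", "event", "notch3_q", "age", "nihss"]],
            duration_col="time_months", event_col="event")
    cph.print_summary()  # hazard ratios with 95% CI
```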
Candidate Biomarkers of Cardiovascular Recurrence Plasma-derived microvesicles from the ‘Derivation Cohort’ were subjected to proteomic profiling, resulting in the identification of 887 proteins. Among patients in the Event+ and Event− groups, 25 and 46 proteins, respectively, showed significant upregulation ( and ). The upregulated proteins in the Event+ group were functionally annotated, revealing pathways associated with inflammation, PI3 kinase and Notch signaling in various vascular diseases . By contrast, upregulated proteins in the Event− group were linked to the toll receptor signaling pathway. Notably, toll-like receptors are believed to worsen ischemic injury, but their brief stimulation prior to ischemia has been shown to be neuroprotective. Among the upregulated proteins in the Event+ group, several were associated with vascular dysfunction. For instance, NOTCH3 was identified as a key contributor to diabetic vasculopathy, while kallikrein (KLKB1) was found to drive atherosclerosis in diabetes and metabolic syndrome. Lymphatic vessel endothelial hyaluronan receptor 1 (LYVE1) present on resident macrophages of the aorta was shown to regulate arterial stiffness and collagen deposition. Additionally, milk fat globule-EGF factor 8 (MFGE8), which is involved in smooth muscle cell migration and proliferation, was found to confer an increased risk of microvascular complications in type 2 diabetes.
Furthermore, genetic risk variants in complement factor H (CFH) were linked to age-related macular degeneration due to aberrant growth of the choroidal vasculature. To explore vasculopathy-related proteins, the GTEx portal was utilized, revealing that while microvesicles are primarily of immune origin (mainly platelets), certain targets such as NOTCH3 and MFGE8 are predominantly expressed in arterial tissues ( and ), supporting their potential implication in vascular pathophysiology. Considering the mechanistic links between NOTCH3 and the cardiovasculature, , the focus of the investigation was directed toward NOTCH3, which was further examined for its significance in an independent patient cohort, as well as in animal stroke and atherosclerosis models. Plasma NOTCH3 and Its Relationship with Cardiovascular Recurrence in Ischemic Stroke Patients We first sought to validate our previous findings using a commercial ELISA assay (Human NOTCH3 ELISA kit, Cusabio) on plasma samples from the same group of patients. The assay had a detection range of 125–8000 pg/ml and a lowest detection limit of 31.25 pg/ml. The results from this ELISA assay confirmed a significant increase in plasma NOTCH3 levels among Event+ patients when compared to Event− patients . To determine the required sample size for the study, we considered the differences in plasma NOTCH3 levels between the Event+ and Event− groups and assumed a cardiovascular recurrence rate of 20%. Based on these considerations, we estimated that a minimum sample size of 420 subjects would be necessary to reject the null hypothesis that there were no differences in plasma NOTCH3 levels between the two outcome groups, with a probability of 0.80 and a type 1 error probability of 0.05. Out of the initial 480 patients with ischemic stroke who were assessed for eligibility, 431 consecutive patients with acute ischemic stroke were recruited for the study and closely followed for the development of cardiovascular recurrence. This group of patients was referred to as the ‘Validation Cohort’ and stroke patients from Derivation Cohort were excluded from this analysis. The mean age of the recruited patients in the Validation Cohort was 59.1 years, with 68% being men. The mechanism of stroke was categorized as follows: large artery disease (30%), cardioembolism (10%), small vessel disease (43%) and undetermined etiology (16%) . Throughout a median follow-up period of 3.5 years (interquartile range, 1.8–4.6 years), a total of 102 cardiovascular events occurred, with an incidence rate of 6.30 events per 1000 person-months. These events included 64 cerebrovascular events (11 fatal strokes, 53 non-fatal strokes and 8 transient ischemic attacks) and 38 coronary artery events (8 fatal myocardial infarctions and 30 non-fatal myocardial infarctions/unstable angina). Significant differences were observed between gender groups. Women were found to be older (63 vs. 57 years) and had greater stroke severity as measured by the NIH Stroke Scale (NIHSS) score (3 vs. 2). Additionally, women had a higher prevalence of atrial fibrillation (8% vs. 3%) and showed higher levels of platelet count (261 vs. 245 × 10 9 /l) and high-density lipoproteins (HDL) (1.24 vs. 1.04 mmol/l). By comparison, men had a higher frequency of cigarette use (58% vs. 11%) and higher levels of leukocytes (8.48 vs. 7.92 × 10 9 /l) and triglycerides (1.76 vs. 1.48 mmol/l). No significant differences were observed between genders concerning other risk factors, stroke mechanisms, and reperfusion treatment. 
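As a rough consistency check (the total person-time at risk is not reported, so this is an approximation rather than a re-analysis), the reported incidence rate implies roughly 16,000 person-months of follow-up, or about 3.1 years per patient on average, which is in line with the reported median follow-up of 3.5 years:

```python
# Back-of-envelope check of the reported incidence rate (illustrative only).
events = 102                    # recurrent cardiovascular events in the Validation Cohort
rate_per_1000_pm = 6.30         # reported incidence per 1000 person-months
person_months = events / (rate_per_1000_pm / 1000)    # ~16,190 person-months
patients = 431
avg_follow_up_years = person_months / patients / 12   # ~3.1 years per patient on average
print(round(person_months), round(avg_follow_up_years, 1))
```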
Blood samples were collected from stroke patients at a median of 3 days (range, 1–4 days) after the onset of stroke. Plasma NOTCH3 were significantly higher in stroke patients compared with age-matched controls , particularly in those who later experienced cardiovascular recurrence (median, 1265 pg/ml vs. 1104 pg/ml). Multivariable regression analysis identified several factors that were identified as significant predictors of plasma NOTCH3 levels in both men and women. In men, these predictors included chronic renal failure, peripheral artery disease, and NT-proBNP, which together accounted for 43% of the variations in plasma NOTCH3 levels. For women, chronic renal failure and previous stroke were significant predictors, explaining 47% of the variations in plasma NOTCH3 levels . No significant correlations were observed between plasma NOTCH3 and interleukin-6, S100β, cortisol and insulin (data not shown). In men, patients with plasma NOTCH3 levels greater than 1600 pg/ml were found to have a higher predisposition to developing cardiovascular recurrence compared to those with levels less than 800 pg/ml (adjusted hazards ratio 2.29, 95% CI 1.10–4.77) . However, no significant relationship was observed between plasma NOTCH3 levels and cardiovascular recurrence in women. Upregulation of NOTCH3 Expression in Mouse Models of Stroke and Atherosclerosis A transient mouse stroke model was utilized to investigate the changes in NOTCH3 expression over time following cerebral ischemia. The levels of NOTCH3 expression in mouse sera were compared to sham controls. It was observed that during middle cerebral artery (MCA) occlusion, NOTCH3 expression was higher and reached its peak at 24 h after reperfusion, before gradually decreasing at 72 h ( and ). To explore the potential connection between NOTCH3 and cardiovascular diseases, immunofluorescence studies were conducted to examine NOTCH3 expression in the arterial wall of two groups: wild-type mice and atherosclerotic Apoe knockout (Apoe−/−) mice. By 22–24 weeks of age, Apoe−/− mice had developed extensive atherosclerotic plaques in major arteries, including the inominate arteries . The plaques contained a significant number of inflammatory cells, including macrophages. Smooth muscle cells migrated from the media to the plaque and intima, contributing to the formation of the lipid-rich core and fibrous cap. In wild-type mice and plaque-free Apoe−/− mice, NOTCH3 was predominantly expressed in the vascular smooth muscle cells lining the arteries. Conversely, in Apoe−/− mice with plaque formation, NOTCH3 was found to co-localize with PECAM1, an endothelial marker, specifically in the endothelial lining of the regions affected by plaque, whereas this co-localization was minimal in the wild-type mice or in plaque-free regions of Apoe−/− mice . An inverse relationship was observed between the percentage of Notch3 expression in the total endothelial layer and the percentage of plaque area relative to the total luminal area . In contrast, a positive correlation was noted between the extent of endothelial layer disruption and the percentage of plaque area relative to the total luminal area. Elevated levels of Notch3 expression correlated with a reduced plaque burden, indicating a potential protective role of Notch3 in maintaining endothelial integrity and inhibiting plaque formation. 
While these findings could also be attributed to the greater loss of endothelial cells lining the more advanced plaque lesions, they highlight the dynamic changes in NOTCH3 expression following cerebral ischemia in the mouse stroke model and suggest NOTCH3-associated endothelial alterations during the progression of atherosclerosis. Circulating Endothelial Cells Suggest Endothelial Instability following Ischemic Stroke Compared with age-matched controls, stroke patients exhibited a decrease in the absolute percentage of circulating endothelial cells (CECs) within the total peripheral blood mononuclear cell (PBMC) fraction . Despite the lower levels, CECs from stroke patients showed significantly higher expression of the NOTCH3 gene compared to CECs from normal controls . Additionally, the expression of fibroblast growth factor receptor 1 (FGFR1) was found to be significantly lower in CECs of stroke patients when compared to controls. This downregulation of FGFR1 is noteworthy because FGFR1 is a crucial inhibitor of endothelial-to-mesenchymal transition (EndMT). Such a decrease in FGFR1 expression may indicate a greater burden of atherosclerosis in stroke patients, consistent with findings from previous studies in mice and patients with coronary artery disease. However, there were no significant differences observed in the expression of mesenchymal genes such as FN1 and ACTA2. Collectively, these findings suggest that endothelial instability might be a contributing factor to NOTCH3-associated atherosclerosis.
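The qPCR comparisons of NOTCH3 and FGFR1 in CECs were normalized to GAPDH as the endogenous control; the exact quantification method is not stated in the text, but a common choice for such data is the 2^−ΔΔCt approach, illustrated below with made-up Ct values (a hypothetical helper function, not part of the study's pipeline).

```python
# Hypothetical illustration of relative expression by the 2^-ddCt method, one common
# way to compare FACS-sorted CEC qPCR data (e.g., NOTCH3 or FGFR1 normalized to GAPDH).
import numpy as np

def rel_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_gapdh)              # dCt, stroke CECs
    d_ct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_gapdh_ctrl)   # dCt, control CECs
    dd_ct = d_ct_sample - d_ct_control.mean()                               # ddCt vs control mean
    return 2.0 ** (-dd_ct)                                                  # fold change per replicate

# Made-up Ct values for illustration only
stroke_fold = rel_expression(ct_target=[26.1, 25.8, 26.4], ct_gapdh=[18.0, 17.9, 18.2],
                             ct_target_ctrl=[28.3, 28.0, 28.5], ct_gapdh_ctrl=[18.1, 18.0, 18.2])
print(stroke_fold)  # values > 1 would indicate higher expression in stroke CECs than controls
```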
Epidemiological evidence indicates that individuals who have experienced an ischemic stroke are at a higher risk of developing another stroke or myocardial infarction. In this study, we investigated protein expressions in microvesicles and identified NOTCH3 as a potential biomarker of cardiovascular recurrence. We subjected our findings to rigorous validation using an independent patient cohort and provided valuable preclinical evidence that sheds light on the role of endothelial instability as a potential mechanism in NOTCH3-mediated atherosclerosis. To the best of our knowledge, this is the first study to employ a meticulous proteomic approach to elucidate the biological predispositions that underlie cardiovascular recurrence in patients with ischemic stroke. NOTCH3 is a transmembrane receptor known for its role in maintaining blood vessel integrity and blood–brain barrier function. Identification of NOTCH3 resonates closely with a genetic stroke syndrome, Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL), where mutations in the NOTCH3 gene are linked to an early-onset stroke and dementia clinical phenotype. Previous studies have also demonstrated that certain cysteine-altering NOTCH3 variants increase the risk of stroke among Taiwanese Chinese and Caucasians from the Geisinger DiscovEHR initiative cohort. In this study, we found that ischemic stroke patients with higher levels of plasma NOTCH3 had a 2-fold increased risk of experiencing another cardiovascular event during a median follow-up period of 3.5 years. To further investigate the involvement of NOTCH3 in stroke pathogenesis, we examined animal stroke models that exhibited a rapid increase in NOTCH3 levels immediately following cerebral ischemia, and the elevated levels persisted for an additional 72 h. Interestingly, we observed that the association between NOTCH3 and cardiovascular recurrence was significant in male stroke patients but not in female patients despite comparable levels of plasma NOTCH3 at baseline. Notably, registry data of CADASIL patients have also reported sex-specific differences, with men being more susceptible to immobility, experiencing a poorer quality of life, and having a shorter life expectancy compared to women. We observed a close correlation between circulating levels of NOTCH3 and end-stage cardiovascular diseases such as chronic renal failure, peripheral artery disease, and stroke. Additionally, there was a notable association between NOTCH3 levels and NT-proBNP levels, which is a biomarker of vascular damage, suggesting a potential link between NOTCH3 and atherosclerosis. The significance of the endothelium in NOTCH3-associated atherosclerosis was supported by our immunofluorescence findings.
We observed increased expression of NOTCH3 co-localizing with the endothelium of defective arterial lining and atherosclerotic plaques in Apoe−/− mice. Moreover, we detected higher gene expression of NOTCH3 in CECs of stroke patients. Although NOTCH3 gene mutations in patients with CADASIL are associated with an atrophy of vascular smooth muscle cells (VSMCs), our findings point to an increased expression of NOTCH3 in the endothelium in preclinical and human stroke. The involvement of the endothelium is not surprising since mutations in NOTCH3 genes often lead to alterations in cysteine residues, which have been linked to endothelial damage. Furthermore, endothelial cells are known to selectively increase NOTCH3 expression through a process known as endothelial-to-mesenchymal transition (EndMT), which facilitates a phenotypic switch of endothelial cells to assume properties of mesenchymal cells. Previous studies have also reported lower levels of FGFR1 in mouse atherosclerosis models and human atherosclerotic lesions. As FGFR1 suppresses EndMT, the reduced FGFR1 gene expression observed in CECs of stroke patients compared to control levels suggests EndMT involvement in the pathogenesis of NOTCH3-mediated atherosclerosis. This process could contribute to compromised vessel integrity and suboptimal vascular remodeling following cerebral ischemia. Shed from the lining of the vascular wall into the bloodstream, the percentages of CECs are thought to vary considerably in health and among those with cardiovascular diseases. Interestingly, we unexpectedly observed fewer CECs in patients approximately 3 days after the onset of cerebral ischemia compared to age-matched controls. This finding is somewhat contrary to expectations since CEC levels are typically higher in stroke patients due to the greater burden of atherosclerosis. We propose that the lower levels of CECs might be explained by vascular protection conferred by antiplatelet and statin treatments administered during hospitalization, which could reduce the number of endothelial cells shed from the vascular lining in these patients. There are several limitations that warrant discussion. First, the levels of NOTCH3 were measured only once and were not assessed serially in the weeks and months following acute stroke. This leaves uncertainty regarding whether the elevation in NOTCH3 would persist over time and whether the existing secondary prevention treatments could have an impact on these levels. Second, we did not anticipate the possibility of sex-specific differences in the association between NOTCH3 and cardiovascular recurrence. As a result, our study may have been underpowered to detect changes in female patients, potentially leading to the risk of type 2 errors. Third, the samples used for isolating CECs were obtained after the administration of antiplatelet and statin treatment. This raises the question of whether the lower levels of CECs observed could be influenced by the treatment administered following stroke hospitalization. Fourth, our study did not include patients with hemorrhagic stroke. Consequently, we cannot generalize our findings to individuals with hemorrhagic stroke.
Using a carefully characterized cohort with close monitoring of cardiovascular recurrence, the data presented in this study offer compelling evidence supporting the role of NOTCH3 in stroke pathogenesis and its association with the risk of cardiovascular recurrence. Future research should delve into the functional significance of NOTCH3 signaling in both vascular smooth muscle cells and the endothelium, and explore the potential for utilizing insights into NOTCH3 signaling and atherosclerosis for the development of stroke pharmacotherapeutics.
Multi-Omics Sequencing Dissects the Atlas of Seminal Plasma Exosomes from Semen Containing Low or High Rates of Sperm with Cytoplasmic Droplets

Cytoplasmic droplets (CDs) are formed in the testicular spermatogenic epithelium. CDs are the cytoplasmic residue of round spermatid cytoplasm that has been phagocytosed by Sertoli cells. The cytoplasmic droplet is a marker of normal sperm morphology and plays an important role in the maturation of epididymal sperm. One of the earlier findings on CDs was that CDs move in a peristaltic-like manner along the midpiece of the flagellum from the neck to the annulus as spermatozoa pass through the epididymal duct. Current research suggests that CDs are a temporary source of energy for sperm as they mature in the epididymis, and they can exchange small RNAs (mainly tsRNA and rsRNA) and proteins with the sperm. This is bound to have a direct effect on the sperm. However, if CDs are not shed during ejaculation, they can interfere with the morphology and function of sperm. CDs remaining on the sperm will increase the osmotic pressure inside the sperm, causing it to absorb water, swell, and rupture or bend; CD retention also disrupts sperm motility. In severe cases, it can lead to male sterility. Many surveys show that CD retention is considered to be the most common abnormal sperm morphology in boar semen and the main reason for reducing the utilization rate of boar semen. This is because the retention of CDs on ejaculated spermatozoa reduces the female's pregnancy rate, delivery rate, and litter size. Environmental conditions, nutritional conditions, and age can affect CDs. Boars living in environments with high temperature and humidity have higher residual rates of CDs. It has also been shown that deletion of the Spem1 and 15-lipoxygenase genes causes CDs to fail to drop off the sperm in mice. New research has revealed that the SYPL1 gene is enriched in the cytoplasmic droplets of sperm, and its absence results in the failure to produce sperm protoplasmic droplets, leading to a significant reduction in fertility in mice. Seminal plasma is the liquid part of semen excluding cells, which constitutes the survival and maturation environment of sperm and regulates the movement and morphology of sperm. The seminal plasma contains a large number of lipid particles. Among them, the major components of seminal exosomes are prostasomes and epididymosomes that are secreted by the prostate and epididymis, respectively. The exosomes in the seminal plasma are very important for sperm motility, morphology, the acrosome reaction, capacitation, and fertilization. They could alter the lipid composition of the sperm membrane and assist in the development of future sperm motility and the ability to penetrate the zona pellucida. Meanwhile, ATP generated in seminal plasma exosomes can finely modulate mitochondrial metabolism to regulate sperm motility. We previously analyzed boar seminal plasma exosomes from semen containing spermatozoa with or without CDs and identified 16 significantly different miRNAs. This suggests that exosomes in seminal plasma may have an effect on sperm CDs. At present, no other relevant studies have been reported, and it is not clear how the exosomes in the seminal plasma can affect the shedding of sperm CDs.
Profiling exosomal proteins, mRNAs, and lncRNAs can be helpful for the identification of molecular markers for diagnosis and prognosis and for closure of knowledge gaps regarding the shedding of CDs. In this study, we performed a multi-omics analysis on the cargos in exosomes from semen containing sperm with high or low rates of CDs to systematically elucidate the biological processes related to sperm CDs. The results of this study may help to generate new perspectives on the shedding of sperm CDs and ultimately provide a new scheme for improving the quality and utilization rate of boar semen. 2.1. Characterization of Exosomes Derived from Seminal Plasma A schematic procedure for the study is shown in A. Boar seminal plasma was obtained from semen containing sperm with low or high rates of cytoplasmic droplets, and exosomes were isolated from the seminal plasma. The CD rates were all less than 3% in the low group and more than 14% in the high group ( B). A DIA-based proteomics strategy and high-throughput sequencing approach were used to quantitate exosome cargos, including proteins, mRNAs, and lncRNAs. To characterize the exosomes, we performed TEM, NTA, and immunoblotting. The TEM findings revealed that exosomes had the usual cup shape ( C). The NTA findings revealed that the concentration of isolated exosomes was 1.9 × 10 12 particles/mL and that exosomes were between 50 and 150 nm in diameter, which is compatible with the reported exosome size ( D). In addition, the immunoblotting results demonstrated that the vesicles were positive for markers of exosomes, including HSP70, TSG101, and CD63 proteins ( E). 2.2. Transcriptome Profile of Seminal Plasma Exosomes from Semen Containing Low or High Rates of Sperm with CDs The differentially expressed genes (DEGs) were identified between two groups according to the cutoff threshold of |log2FC| ≥ 1 and p < 0.05. Compared to the low rate of CD group, the seminal plasma exosomes of the high group contained 486 DEGs, of which 33 were up-regulated, and 453 were down-regulated ( A). A heatmap shows significantly different transcriptomic patterns of DEGs between the two groups ( B). These DEGs were mainly enriched in multiple pathways of interest, including cytoskeleton in muscle cells, regulation of actin cytoskeleton, ECM–receptor interaction, and axon guidance pathways ( C,D). Moreover, a small number of DEGs were enriched in the phospholipase D signaling pathway and PI3K–Akt signaling pathway ( C). It indicated that all of these differentially enriched pathways may be involved in the progression of CD shedding, but the regulation of cytoskeleton may play a crucial role. Interestingly, the down-regulation of the insulin gene INS , which was involved in the above functions, may directly affect insulin signaling ( D). Consequently, we performed ROC analyses and calculated area under the curve (AUC) of related DEGs to verify their potential. The AUCs for ITGAL , ITGB4 , FMNL1 , and INS were 0.96, 0.92, 0.96, and 0.8, respectively, indicating that these cytoskeleton-related DEGs can be used as markers to determine whether boar spermatozoa have high or low rates of residual cytoplasmic droplets ( E). 2.3. lncRNAs Profile of Seminal Plasma Exosomes from Semen Containing Low or High Rates of CDs After assembling the reads using transcript assembly software, known mRNAs and transcripts smaller than 200 bp were removed. 
Then, the remaining new transcripts were subjected to coding ability prediction using the prediction software CPC2 (Coding Potential Calculator) ( http://cpc2.cbi.pku.edu.cn , 4 June 2024, v2.0) and CNCI (Coding-Non-Coding Index) ( https://github.com/www-bioinfo-org/CNCI#install-cnci , 4 June 2024, v1.0). CPC and the CNCI predicted the intersection of transcripts with no coding potential as the final new predicted lncRNAs . Next, according to the cutoff threshold of |log2FC| ≥ 1 and p < 0.05, we identified 503 lncRNAs that were dysregulated in exosomes ( A,B). To elucidate the potential functions of these differentially expressed lncRNAs (DElncRNAs), we performed KEGG pathway analysis of their target genes, and the top 10 significant pathways are displayed ( C,D). As shown in D, the prolactin signaling pathway, cellular senescence, MAPK signaling pathway, and FoxO signaling pathway were significantly enriched. To further elucidate the potential relationships among the target genes, we constructed a molecular interaction network and identified the core modules of this network using the MCODE plugin in Cytoscape ( https://cytoscape.org , 1 September 2023, v3.10.1). As shown in E, we identified two hub clusters. Noteworthy, one of them is the regulatory network of the insulin gene INS . We further found that many target genes were involved in insulin-related pathways, such as the insulin signaling pathway, insulin secretion, and the FoxO signaling pathway, which indicates that insulin signaling plays a role in the regulation of CD shedding ( F,G). 2.4. Proteins Profile of Seminal Plasma Exosomes from Semen Containing Low or High Rates of Sperm with CDs The differentially expressed proteins (DEPs) were identified between two groups according to the cutoff threshold of |log2FC| ≥ 0.58 and p < 0.05. Compared to the group with a low rate of CDs, the seminal plasma exosomes of the group with a high rate contained 40 DEPs, of which 28 proteins were up-regulated, and 12 proteins were down-regulated ( A, ). Further hierarchical heatmap showed the relative expression characteristics of different proteins between the two groups ( B). These DEPs were mainly enriched in proteasome, starch and sucrose metabolism, insulin resistance, and the insulin signaling pathway ( C,D). Among them, glycogen phosphorylases PYGM and PYGB, which are essential enzymes for glycogen degradation, were significantly upregulated in the groups with a high rate of CDs ( E). It is well known that when insulin decreases, the breakdown of glycogen increases. The RNA of the insulin gene INS was significantly down-regulated in the high group, which inhibited insulin signaling. This may also be the reason for the up-regulation of PYGM and PYGB. The AUC values of PYGM and PYGB were 0.84 and 0.92, respectively, indicating that these proteins may be potential diagnostic markers for CDs residues ( F). 2.5. Integrative Analysis of Proteomics and Transcriptomics Datasets Derived from the Seminal Plasma Exosomes The multi-omics analysis results revealed no common genes among the DEGs, DEPs, and DElncRNA target genes . There are 17 common genes between DEGs and DElncRNA target genes, including the insulin gene INS ( A). We further analyzed the functional changes and found that the KEGG function of DEPs could be almost completely included by the DEGs and DElncRNA target genes, and insulin signaling pathway and axon guidance were the common functions of the three ( B). 
At the same time, we also found other insulin-related functions in DEGs and DElncRNA target genes, such as insulin secretion. This also indicates that insulin signal transduction plays an important role in the process of protoplasmic droplet shedding, and its mechanism needs to be further studied. We also found other common functions in DEGs and DElncRNA target genes, such as cytoskeleton and ECM–receptor interaction ( B). These functions and axon guidance may directly affect the binding of cytoplasmic droplets and sperm, which needs to be further explored. Based on the above results, we hypothesize that exosomes from seminal plasma may affect cytoplasmic droplet shedding by acting on the insulin signaling pathway and cytoskeletal regulation ( C).
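For illustration, the differential-expression cutoffs used throughout the Results (|log2FC| ≥ 1 with p < 0.05 for DEGs and DElncRNAs; |log2FC| ≥ 0.58, roughly 1.5-fold, with p < 0.05 for DEPs) and the ROC/AUC evaluation of candidate markers can be sketched as below. The table layout and example numbers are assumptions for demonstration, not the authors' analysis scripts.

```python
# Illustrative sketch of the thresholding and ROC evaluation described above
# (hypothetical table layout; not the published pipeline).
import pandas as pd
from sklearn.metrics import roc_auc_score

def call_differential(tbl: pd.DataFrame, lfc_cut: float, p_cut: float = 0.05) -> pd.DataFrame:
    """Keep features with |log2FC| >= lfc_cut and p < p_cut.
    lfc_cut = 1.0 was used for DEGs/DElncRNAs, 0.58 (~1.5-fold) for DEPs."""
    hits = tbl[(tbl["log2FC"].abs() >= lfc_cut) & (tbl["pvalue"] < p_cut)].copy()
    hits["direction"] = hits["log2FC"].apply(lambda x: "up" if x > 0 else "down")
    return hits

def marker_auc(expression, group_labels) -> float:
    """AUC for separating high- vs low-CD samples using one feature's abundance.
    group_labels: 1 = high CD-rate group, 0 = low CD-rate group."""
    return roc_auc_score(group_labels, expression)

# Example usage with made-up numbers:
# degs = call_differential(gene_table, lfc_cut=1.0)
print(marker_auc(expression=[5.1, 4.8, 5.3, 2.2, 2.5, 2.0],
                 group_labels=[1, 1, 1, 0, 0, 0]))  # 1.0 for perfectly separated groups
```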
This also indicates that insulin signal transduction plays an important role in the process of cytoplasmic droplet shedding, and its mechanism needs to be further studied. We also found other common functions in the DEGs and DElncRNA target genes, such as cytoskeleton and ECM–receptor interaction ( B). These functions and axon guidance may directly affect the binding of cytoplasmic droplets and sperm, which needs to be further explored. Based on the above results, we hypothesize that exosomes from seminal plasma may affect cytoplasmic droplet shedding by acting on the insulin signaling pathway and cytoskeletal regulation ( C).

The molecular mechanism of cytoplasmic droplet shedding is still unknown. A number of studies in animals and humans have been conducted to determine the compositions and roles of semen extracellular vesicles . Recent studies have indicated that exosomes, as semen-derived extracellular vesicles, are rapidly absorbed by the sperm plasma membrane and play a crucial role in sperm structure and function . Systematic studies on the components of seminal plasma exosomes from semen containing sperm with high rates of CDs will be helpful for elucidating the functions and related regulatory mechanisms of exosomes. In the present study, we performed integrative proteomics and transcriptomics analyses to assess the landscape and the molecular signatures of exosomes from semen containing sperm with high rates of CDs and to promote a new understanding of CD shedding. Owing to the source of available samples, only Duroc boars were selected as the research target in this study.

Using multi-omics analysis of the components, we found that many DEGs and lncRNA target genes are enriched in pathways such as ECM–receptor interaction and cytoskeletal regulation, with genes such as VIM and the integrins ITGA5 and ITGB4 significantly down-regulated in the high residue group. The extracellular matrix (ECM) is structurally and functionally coupled to the cytoskeleton. In addition to maintaining cellular morphology, the ECM interacts with cells to regulate a variety of functions, including cell junctions, adhesion, migration, and differentiation . Integrins are the main adhesion receptors for ligands of the ECM, linking the actin cytoskeleton to the ECM and enabling cells to sense matrix rigidity and mount a directional cell migration response to stiffness gradients . Along with changes in integrins, the actin cytoskeleton must also be altered, and we found that the expression of cytoskeleton-related genes, such as MYLK2 , FLNC , and TCAP , was also significantly reduced in the high residue group. Although the role of the ECM and cytoskeletal recombination in spermatogenesis , capacitation and acrosomal reactions has been established , its role in the migration of sperm cytoplasmic droplets is not yet clear. In addition, the internal organellar membranes rotate in a vortex-like manner around the axoneme and mitochondrial sheath as the CD slides along the flagellum, and in this process they appear to alter the plasma membrane of the spermatozoa . This raises the intriguing possibility that exosomes in seminal plasma may influence the migration and shedding of sperm CDs by regulating ECM and cytoskeletal recombination. As far as we know, this is the first time this idea has been put forward, but the specific processes and modes of regulation involved in these functions need to be further explored.
In addition to these results, we found that the insulin signaling pathway was significantly enriched in the exosomal differential proteins, mRNAs, and lncRNAs. Importantly, the expression of the insulin gene INS was significantly down-regulated in the high residue group, and this may lead to attenuated insulin signaling. Activation of the insulin receptor corresponds to two crucial metabolic functions, i.e., uptake of glucose and storage of glycogen . Insulin receptors are involved in the recruitment of phosphatidylinositol-3-kinase (PI3K) in the insulin signaling pathway, which in turn leads to phosphorylation and activation of the serine/threonine kinase Akt (protein kinase B) . Upon activation of Akt, intracellular vesicles containing glucose transporter protein 4 (GLUT4) are transported to the plasma membrane, thereby allowing the cell to take up glucose . Once glucose is transported into the cell, it is phosphorylated to G6P by glucokinase (GCK) . We identified GCK , PIK3R2 , and PIK3CD as target genes of differential lncRNAs in exosomes. At the same time, activation of Akt by insulin causes the phosphorylation and subsequent inhibition of GSK3β. Inactivation of GSK3β leads to the dephosphorylation of glycogen synthase (GS) and increased glycogen synthesis . Conversely, when insulin levels are low, gluconeogenesis and glycogenolysis are stimulated to maintain glucose levels . This may also explain the up-regulation of PYGM and PYGB expression in the high residue group. Glucose metabolism plays an important role in spermatogenesis . It is necessary for the maintenance of basic cell activities and their specific characteristics, such as motility and the activity of a mature sperm that leads to fertilization . Here, we propose a new hypothesis that an imbalance in glucose metabolism caused by reduced insulin levels may affect the migration and shedding of sperm cytoplasmic droplets.

Our study showed that seminal plasma exosomes may be involved in the migration and shedding of sperm cytoplasmic droplets by acting on the cytoskeleton and insulin signaling. However, these conclusions currently lack functional validation. Further work should validate this with more detailed experiments, determine the role of exosomes in the migration and shedding of cytoplasmic droplets, and clarify the specific molecular mechanism.

4.1 Sample Collection Duroc boars aged 15–28 months were selected for this study. All boars came from the same farm (Guigang City, Guangxi Province, China) and received the same nutrition under the same feeding and management conditions. Boars were kept in controlled environmental conditions with a temperature of 20 °C to 24 °C and a relative humidity of 60%. Semen was collected by personnel at intervals of 5–7 days between ejaculates. All the boars on the farm were assessed for semen quality for 1 month. Boars with a CD residue rate of less than 3% throughout the observation period and at the time of sample collection were included as the low residue group ( n = 5). Boars with a CD residue rate of more than 14% throughout the observation period and at the time of sample collection were included as the high residue group ( n = 5). Semen samples were assessed using the CASA system (IMV Technologies, L'Aigle, France). The cytoplasmic droplet rate in this paper is the sum of the proximal and distal cytoplasmic droplet rates. The test data of all samples are shown in .
After the semen sample was fully liquefied, it was centrifuged at 1500× g for 20 min at room temperature to separate the seminal plasma. The seminal plasma was frozen at −80 °C.

4.2 Isolation of Exosomes In our study, ultracentrifugation was used to isolate seminal plasma exosomes following the protocol described previously . The seminal plasma was centrifuged at 3000× g for 30 min at 4 °C to remove large cell fragments or debris. The supernatant was filtered through a 0.22-μm membrane filter, after which the filtrate was centrifuged at 100,000× g for 80 min at 4 °C. The resulting pellet was resuspended in phosphate-buffered saline (PBS) on a sucrose cushion and centrifuged at 100,000× g for 2 h to pellet the exosomes.

4.3 Characterization of Exosomes The morphology of the isolated exosomes was observed using transmission electron microscopy (TEM) (Hitachi, Tokyo, Japan); the number and size of the exosomes were measured with nanoparticle tracking analysis (NTA) using a ZetaView instrument (Particle Metrix, Inning am Ammersee, Germany). Meanwhile, specific markers for exosomes, including HSP70 (Abcam, CBD, Cambridge, UK), CD9 (Abcam, CBD, UK), and TSG101 (Beyotime, Shanghai, China), were detected by Western blot analysis.

4.4 RNA Sequencing Total RNA was extracted using Trizol reagent (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's procedure. The total RNA quantity and purity were analyzed using a Bioanalyzer 2100 and RNA 6000 Nano LabChip Kit (Agilent, Santa Clara, CA, USA), with RIN > 7.0. Then, ribosomal RNA (rRNA) was depleted from total RNA with a Ribo-Zero Gold rRNA Removal Kit (Illumina, cat. MRZG12324, San Diego, CA, USA). For preparation of RNA libraries, an NEBNext® Ultra™ RNA Library Prep Kit (NEB Cat#E7530L, Ipswich, MA, USA) for lncRNAs was used according to the manufacturer's instructions. Paired-end sequencing (PE150) was performed for mRNAs and lncRNAs on an Illumina NovaSeq™ 6000 sequencing platform. Reads obtained from the sequencing machines were further filtered using Cutadapt ( https://cutadapt.readthedocs.io/en/stable/ , 19 June 2024, v4.9). The raw sequence data were submitted to the NCBI Short Read Archive (SRA) with accession number PRJNA1164465. The software DESeq2 ( https://github.com/thelovelab/DESeq2 , 4 June 2024, v1.42.0) was used for differential expression analyses of the RNA-seq raw counts. Genes, mRNAs, and lncRNAs with p < 0.05 and |log2FC| ≥ 1 were considered differentially expressed. Differentially expressed coding RNAs were then subjected to enrichment analysis of GO functions and KEGG pathways. To further analyze the key or hub modules, we used the Molecular Complex Detection (MCODE) plugin (v1.5.1) in Cytoscape.

4.5 Bioinformatic Analysis of the RNA-Seq Data Transcripts that overlapped with known mRNAs and lncRNAs, as well as transcripts shorter than 200 bp, were filtered out. Then, we utilized CPC2 ( http://cpc2.cbi.pku.edu.cn , 4 June 2024, v2.0) and CNCI ( https://github.com/www-bioinfo-org/CNCI#install-cnci , 4 June 2024, v1.0) with default parameters to predict the coding potential of the novel transcripts. Transcripts with a CPC2 score < 0.5 and a CNCI score < 0 were retained; of these, transcripts with class codes (i, j, o, u, x) were considered novel lncRNAs. The potential target genes affected by cis-regulation were obtained by integrating the data on the differentially expressed lncRNAs and their adjacent (within 100,000 bp) mRNAs.
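The two filtering rules in Section 4.5 amount to a set intersection (transcripts called non-coding by both CPC2 and CNCI) followed by a proximity search (protein-coding genes within 100,000 bp of each differential lncRNA). The sketch below illustrates only that logic with toy data; the score tables, identifiers and coordinates are hypothetical, and parsing real CPC2/CNCI output files would require additional code.

```python
# Sketch only (not the authors' pipeline): call novel lncRNAs as the intersection of
# CPC2 (score < 0.5) and CNCI (score < 0) non-coding predictions, then assign cis
# targets as protein-coding genes within 100 kb. All values below are toy examples.
cpc2_scores = {"TCONS_001": 0.12, "TCONS_002": 0.80, "TCONS_003": 0.30}
cnci_scores = {"TCONS_001": -0.60, "TCONS_002": -0.10, "TCONS_003": 0.40}

novel_lncrnas = (
    {t for t, s in cpc2_scores.items() if s < 0.5}
    & {t for t, s in cnci_scores.items() if s < 0}
)  # non-coding by both tools -> {"TCONS_001"}

WINDOW = 100_000  # bp
coding_genes = [
    {"id": "INS", "chrom": "chr2", "start": 1_200_000},
    {"id": "GCK", "chrom": "chr2", "start": 1_950_000},
]

def cis_targets(lnc_chrom: str, lnc_start: int, genes: list, window: int = WINDOW) -> list:
    """Protein-coding genes on the same chromosome within `window` bp of the lncRNA start."""
    return [g["id"] for g in genes
            if g["chrom"] == lnc_chrom and abs(g["start"] - lnc_start) <= window]

print(novel_lncrnas)                                  # {'TCONS_001'}
print(cis_targets("chr2", 1_150_000, coding_genes))   # ['INS']
```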
4.6 Proteomics Analysis A 100 µg aliquot of extracted proteins from each sample was subjected to reduction. Then, trypsin (trypsin:protein = 1:50) was added, and the sample was incubated at 37 °C overnight. It was then desalted and lyophilized for mass spectrometry analysis. The samples were fractionated using a high-pH reverse-phase fractionator and measured in DIA mode. Measurements were performed on a quadrupole Orbitrap mass spectrometer (Q Exactive HF-X, Thermo Fisher Scientific, Bremen, Germany) coupled to an EASY-nLC 1200 ultra-high-pressure system (Thermo Fisher Scientific) via a nano-electrospray ion source. For DIA, the acquisition method consisted of one MS1 scan (350 to 1500 m/z, resolution 60,000, maximum injection time 50 ms, and AGC target 3 × 10^6) and 42 segments with varying isolation windows from 14 m/z to 312 m/z (resolution 30,000, maximum injection time 54 ms, and AGC target 1 × 10^6). The stepped normalized collision energy was 25, 27.5, and 30. The default charge state for MS2 was set to 3. A DIA library was used to search the MS data of the single-shot samples in Spectronaut 16 (Biognosys, Zürich, Switzerland) for final protein identification and quantitation. All searches were performed against the UniProt Sus scrofa SP proteome database (20,627 target sequences downloaded on 19 December 2023).

4.7 Statistical Analysis GraphPad Prism 8.0 was used for statistical analysis. All results are presented as the mean ± standard error of the mean (SEM). Student's t -test was performed for comparisons between two groups. Differences were considered statistically significant when the p value was <0.05.
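As a concrete restatement of the Section 4.7 analysis, the snippet below computes mean ± SEM per group and an unpaired Student's t-test in Python rather than GraphPad Prism; the numbers are illustrative placeholders, not study data.

```python
# Illustrative sketch of the Section 4.7 statistics (placeholder values, not study data):
# mean ± SEM per group and an unpaired, two-sided Student's t-test at alpha = 0.05.
import numpy as np
from scipy import stats

low_group = np.array([1.2, 1.5, 1.1, 1.4, 1.3])    # e.g., a measurement in the low-CD-rate boars
high_group = np.array([2.1, 2.4, 1.9, 2.6, 2.2])   # the same measurement in the high-CD-rate boars

for name, group in (("low", low_group), ("high", high_group)):
    print(f"{name}: {group.mean():.2f} ± {stats.sem(group):.2f} (mean ± SEM)")

t_stat, p_value = stats.ttest_ind(low_group, high_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```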
Our study showed that boar sperm with different levels of residual cytoplasmic droplets had different fractions of mRNAs, lncRNAs, and proteins in their seminal plasma exosomes. We hypothesize, for the first time, that exosomes are involved in the migration and shedding of sperm cytoplasmic droplets by acting on the cytoskeleton and insulin signaling. We also preliminarily screened and identified marker genes and proteins that can be used to distinguish between high and low rates of residual cytoplasmic droplets. This study provides a new way to elucidate the molecular mechanism of boar sperm cytoplasmic droplet shedding and reveals a possible new role of seminal plasma exosomes in sperm. It also provides a possible new scheme for reducing the residue of sperm cytoplasmic droplets and improving the utilization rate of boar semen. |
Perceptions of Mental Health Challenges and Needs of Indonesian Adolescents: A Descriptive Qualitative Study | 621db647-1f0b-487c-859f-e4b71d22db49 | 11747947 | Health Literacy[mh] | Introduction Adolescents, defined as persons aged 10–19 years, experience a critical period of vulnerability and stress, characterised by conflicts with authority figures such as parents and teachers, peer acceptance, self‐discovery and romantic relationships (Núñez‐Regueiro and Núñez‐Regueiro ; World Health Organization ). Parental conflicts arise from adolescents' autonomy demands and emotional needs unmet by parents (Mastrotheodoros et al. ). High conflict and discord in teacher‐student relationships can negatively affect adolescents' socioemotional development (Ettekal and Shi ). Peer conflicts and bullying are significant stressors for adolescents, affecting their sense of safety and belonging (Skarstein, Helseth, and Kvarme ). Internalised societal beauty standards and external pressures contribute to negative body image and low self‐esteem in adolescents, hindering self‐discovery and fostering unrealistic ideals that undermine their sense of self‐worth (Tort‐Nasarre et al. ). Given these challenges, adolescents are at risk for developing mental health conditions. Mental health conditions (MHCs) represent a critical public health challenge for adolescents globally. Comprising 13% of the total disease burden in this age group, nearly one in seven adolescents experiences an MHC, with anxiety and depression accounting for nearly 40% of cases (World Health Organization ). Untreated MHC during adolescence can lead to dire consequences, including poor physical health, impaired social functioning, substance abuse disorders, self‐injurious behaviours and increased risk of suicide (Radez et al. ). MHCs amongst adolescents often persist into adulthood, leading to long‐term morbidity and posing a global socioeconomic burden (Radez et al. ). Adolescents worldwide are increasingly vulnerable to mental health conditions, a situation worsened by adverse socioeconomic factors, widespread stigma and discrimination and limited access to comprehensive psychosocial support services. (World Health Organization ). In Indonesia, one‐third of adolescents reported experiencing MHC symptoms in 2021 (Center for Reproductive Health et al. ). Anxiety and depression constitute 32% of MHCs in Indonesian adolescents (Center for Reproductive Health et al. ). Factors such as poor family connections, high academic pressure, conflicts related to spirituality and religiosit and exposure to adverse societal pressures on social media have been linked to poor adolescent mental well‐being in Indonesia (Willenberg et al. ). Moreover, cultural norms considerably influence Indonesian adolescents' perceptions of mental health (Brooks, Windfuhr, et al. ). Research has shown that mystical and supernatural forces are often seen as causes of mental illness in Indonesia (Marthoenis, Aichberger, and Schouler‐Ocak ), whereas stigma negatively impacts health conceptualisations and help‐seeking behaviours (Willenberg et al. ). Mental health literacy amongst Indonesian adolescents is generally poor, and the strong emphasis on religion and spirituality in Indonesian culture may contribute to reluctance to discuss mental health issues (Brooks, Prawira, et al. ). With mental health problems being considered taboo and highly stigmatised in Indonesian communities (Putri et al. 
), adolescents with mental health conditions have reported struggling to access the healthcare support they need. Hence, understanding mental health support needs for adolescents in Indonesia is an urgent need. Mental health services in Indonesia are covered by the national health insurance programme (Presidential Regulation of the Republic of Indonesia ). Despite 33% of adolescents reporting psychological distress, only 2% use mental health support services (Center for Reproductive Health et al. ). This finding implied that Indonesia urgently needs to understand adolescents' needs and perceptions of mental health challenges and accordingly establish improved adolescent mental health education (Putri et al. ). Education and mental health services are compulsory for all schools through Indonesia's school health services programme (Indonesian Ministry of Education ). In Jakarta, 86.67% of public schools conduct mental health education, 53.3% implement mental health screening and 80% provide counselling (Kusumawardani et al. ). Teachers and students involved in mental health services highlight that the programme's curriculum has several issues, from limited funding, lack of teachers' training, parental awareness about mental health issues and traditional cultural beliefs and stigma around mental health (Kusumawardani et al. ). Hence, adolescents' mental health challenges and needs must be explored further to develop effective methods of managing mental health issues. Existing qualitative studies on Indonesian adolescents' mental health perceptions have been constrained by limited age ranges, narrow geographic coverage and reliance on group interviews and have not adequately identified adolescents' specific mental health needs (Brooks, Windfuhr, et al. ; Willenberg et al. ). Brooks, Windfuhr, et al. provided valuable insights into adolescents' conceptualisation of mental health by including healthy individuals and those diagnosed with mental health conditions. Whilst this inclusive approach offered a broad understanding of mental health beliefs across diverse mental health statuses, it may have obscured the unique perspectives of healthy adolescents. These individuals might conceptualise mental health differently from those with diagnosed conditions. Similarly, Willenberg et al. examined adolescents' conceptualisation of mental health and its determinants, incorporating school‐attending and non‐school‐attending individuals in focus group discussions. However, this approach may have restricted the exploration of individual experiences, particularly regarding how school‐related factors impact adolescents' views on mental health. Moreover, the use of focus groups in this study, though beneficial for facilitating collective dialogue, may have limited the depth of individualised experiences, especially concerning personal reflections on mental health. To address these research gaps, the present study aims to understand adolescents' perspectives on mental health challenges and needs, taking into account Indonesia's diverse cultural and social factors. It also considers the taboo nature of mental health discussions in Asian contexts (Cogan et al. ). By employing individual interviews and school‐attending adolescents, this study seeks to provide a nuanced understanding of mental health perceptions amongst Indonesian adolescents. 1.1 Aim and Research Question This study aimed to explore and understand Indonesian adolescents' perceptions of their mental health challenges and needs. 
The central research question was: What are Indonesian adolescents' views on their mental health challenges and support needs?

Methods 2.1 Research Design This study utilised a descriptive qualitative design and is part of a larger ongoing mixed-methods research project. While the quantitative component focuses on factors influencing adolescents' mental health literacy and overall psychological well-being, its findings will be reported separately (Yani et al. ). The qualitative aspect, as outlined by Sandelowski , aimed to explore comprehensive insights into Indonesian adolescents' mental health challenges, perspectives and support needs, ultimately ensuring their psychological well-being. This study provides a deeper understanding of how Indonesian adolescents view mental health and seek help. The study was presented per the consolidated criteria for reporting qualitative research studies (COREQ) checklist (Tong et al. ).

2.2 Context A total of 615 adolescents were recruited from four public schools in West Java Province, Indonesia, for the quantitative study, and 14 participated in this qualitative study. West Java Province is Indonesia's most populous province, home to approximately 49 million people, and the population is predominantly Sundanese and Muslim (BPS-Statistics of West Java Province ).

2.3 Participants Adolescent participants were purposefully selected according to the following criteria: (1) aged 13–19 years, an age range chosen because metacognitive abilities refine up to age 13 (Geurten, Meulemans, and Lemaire ); (2) enrolled in an Indonesian public high school; (3) fluent in English or Bahasa; and (4) have varied mean mental health literacy scores ranging from 0 to 13 as measured by the mental health literacy scale (Carr et al. ; Kaligis et al. ). The Mental Health Literacy Scale comprised 13 items, with higher scores indicating better mental health literacy (Carr et al. ; Kaligis et al. ). Participants were not screened for or excluded based on existing mental health conditions. Adolescents with cognitive, auditory or visual impairments, along with those having medical conditions that might hinder their ability to fully participate in the study, were excluded. Data saturation is the point at which no new information emerges from the interviews (Guest, Bunce, and Johnson ) and was reached after interviewing 12 participants. Two additional interviews were performed to confirm the findings. Transcriptions were completed promptly after each interview, allowing for concurrent analysis and enabling the researchers to identify emerging patterns and themes during the data collection process. Data collection was concluded once no new themes or insights were identified and data saturation was achieved from the additional data (Saunders et al. ).

2.4 Data Collection The study was conducted at two public schools (one junior high school and one senior high school) in West Java Province, Indonesia. After obtaining ethics approval, teachers at these schools, both females, screened and referred adolescents meeting the eligibility criteria to the research team.
The primary researcher (D.I.Y.), a female PhD student registered nurse with no direct influence over adolescents' education, approached potential school participants. The primary researcher received training in qualitative interviewing techniques from an experienced qualitative researcher (S.S.). The study details were explained to adolescents and parents using a Participant Information Sheet. The parents' voluntary informed consent and adolescents' assent were obtained using an online form shared via a secure link. Sociodemographic details, e.g. age and gender, were collected online from the participants before the interviews. One‐on‐one, face‐to‐face interviews were conducted in one of the school's private meeting rooms to provide an uninterrupted and conducive environment for the interviews (Bolderston ). A semi‐structured interview guide with open‐ended questions, supplemented by follow‐up and probing questions, was developed based on the literature (Meldahl et al. ; Soria‐Martínez et al. ) and pilot‐tested before the actual interviews. Adolescents were asked questions about mental health challenges, knowledge and experiences with depression and anxiety, as well as perceived mental health needs. The interview guide used for this study is appended in Appendix . The open‐ended questions guided the main flow (Doody and Noonan ; McGrath, Palmgren, and Liljedahl ), and the follow‐up probing questions allowed the researchers to seek clarity and further insights (DeJonckheere and Vaughn ). Data collection occurred in December 2023. All the interviews were conducted in Bahasa and audio‐recorded with the participant's permission, averaging 36 min. The bilingual (fluent in speaking and writing English and Bahasa Indonesia) primary researcher and research assistants transcribed all the recordings verbatim in Bahasa and then back‐translated to English (Chen and Boore ). The field notes of nonverbal responses and expressions were documented in English after each interview and were analysed with the interviews. All the interviews were conducted by the same primary researcher (D.I.Y.). 2.5 Data Analysis All analyses were conducted using English transcripts and field notes. The data analysis followed a six‐phase thematic analysis approach: (1) data familiarisation, (2) initial coding, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing a report (Braun and Clarke ). All transcripts and field notes were read multiple times to gain familiarity with the collected data. An initial set of codes was then generated using a manual colour‐coding system to categorise related segments, which were subsequently organised into overarching themes and subthemes. The primary researcher (D.I.Y.) and co‐researcher (J.Y.X.C.) independently coded all transcripts. They then meticulously compared their codes, and codes with the same meaning were tallied. Approximately 87% of the codes were similar between the two authors. Extensive discussions occurred amongst all authors to compare analyses and finalise the themes. Any conflicts were resolved by consulting the senior researcher (S.S). This process of multiple analyst triangulation between the authors helped ensure the credibility of the analytical process. 2.6 Rigour To maintain trustworthiness, this study ensured credibility, dependability, transferability and confirmability (Lincoln and Guba ; Nowell et al. ). 
Credibility was established by thoroughly and repeatedly examining the transcripts to confirm that the resulting themes accurately reflected adolescents' perceptions of their mental health needs (Lincoln and Guba ; Nowell et al. ). To further enhance credibility, regular peer debriefing sessions were conducted. During these sessions, emerging themes and interpretations were reviewed by research team members not directly involved in data collection, with assumptions being challenged and fresh perspectives on the analysis being provided (Lincoln and Guba ; Nowell et al. ). Transferability was enhanced by providing detailed, vivid descriptions of adolescents' experiences using verbatim interview quotes (Lincoln and Guba ; Nowell et al. ). An audit trail was created to ensure confirmability and dependability by retaining copies of all transcripts (in Bahasa and English), field notes, author reflections and supporting documentation (Lincoln and Guba ; Nowell et al. ). Furthermore, all authors kept detailed personal reflective journals throughout the analysis to strengthen reflexivity and ensure the findings' authenticity (Barrett, Kajamaa, and Johnston ; Buetow ). In these journals, thoughts, feelings and potential biases related to the study were documented (Buetow ). Regular team meetings were held to discuss these reflections, allowing for a critical examination of how personal experiences and perspectives might influence data interpretation and for adjustments to be made to the approach accordingly (Nowell et al. ). 2.7 Ethical Consideration The study received ethics approval from the Universitas Padjadjaran Institutional Review Board (Reference number: 1405/UN6.KEP/EC/2023), located in Bandung, West Java Province, Indonesia. Voluntary participation was reinforced during study recruitment, and each adolescent was assigned a unique code number to protect their identity and ensure the anonymity and confidentiality of the collected data. The consent and demographic forms were securely stored on a standalone password-protected computer, and the audio recordings and transcripts were kept on a separate computer.
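As a purely arithmetic illustration of the coder-comparison step described in Section 2.5 (where roughly 87% of codes were judged to carry the same meaning), the toy calculation below shows how a simple percent-agreement figure can be derived from two coders' code sets. The code labels are hypothetical, and a formal reliability coefficient such as Cohen's kappa would require the full segment-by-segment coding matrix rather than this shortcut.

```python
# Toy example only: simple percent agreement between two coders' code sets.
# Labels are hypothetical; the study reported roughly 87% of codes with the same meaning.
coder_a = {"emotional turmoil", "academic pressure", "family conflict", "peer support",
           "body image", "coping by prayer", "bullying", "parental expectations"}
coder_b = {"emotional turmoil", "academic pressure", "family conflict", "peer support",
           "body image", "coping by prayer", "bullying", "career uncertainty"}

shared = coder_a & coder_b                      # codes judged to carry the same meaning
agreement = len(shared) / len(coder_a | coder_b) * 100
print(f"Percent agreement: {agreement:.0f}%")   # 78% for this toy set
```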
Results 3.1 Characteristics of the Participants The study included 14 adolescents (mean age = 14.43 years, SD = 1.22). Of these, eight (57%) were male, and six (43%) were female. Nine (64%) attended junior high school, whereas five (36%) were in senior high school. Most participants were Muslim ( n = 12, 86%) and of Sundanese ethnicity ( n = 12, 86%, Table ). The range of mental health literacy scores of adolescents who participated in the interviews was 0–7, out of a possible range of 0–13, and the mean score was 3.93 (SD = 2.13). Adolescents' perceptions of mental health and mental health needs were consolidated into three themes and eight subthemes (Figure and Table ). The main themes were: (1) Transitioning to adulthood: journeys through emotional turmoil and societal expectations; (2) Navigating challenges: diverse adolescent responses; and (3) Breaking the silence: empowering adolescents through comprehensive mental health education and support (Figure ).

3.2 Transitioning to Adulthood: Journeys Through Emotional Turmoil and Societal Expectations This theme highlighted the mental health challenges faced by adolescents as they transitioned into adulthood and was supported by three subthemes: emotional turmoil; growing pains; and aspirations, motivations and desires.

3.3 Emotional Turmoil Many adolescents reported struggling with varied stressors and emotions in their lives, ranging from difficulties in controlling their emotions to experiencing anger outbursts and low moods without any apparent reason. A few female adolescents attributed their 'mood swings' to the hormonal fluctuations experienced during their menstruation. I can say I get angry easily too, and it's hard to control myself… when it's my period, the mood is always messed up. (Junior High Student 2) Some adolescents also felt self-conscious about their outlook and afraid of being judged negatively by their peers. A few of those who were being bullied and victimised at school felt 'stressed' when they had no friends to 'trust'. Some adolescents who were exploring romantic relationships with the opposite gender felt 'worthless' and insecure when the relationships did not work out well. When someone doesn't like me, I keep thinking about what's lacking in me. So I keep thinking and end up feeling stressed, like constantly introspecting until it becomes overwhelming. (Senior High Student 5) Family conflicts also contributed to adolescents' psychological distress. Many felt unsupported by their parents and 'hurt' when their parents reprimanded them 'harshly'. Moreover, adolescents felt distressed whenever they thought about their past traumatic experiences (e.g., being sexually harassed in school) or worried about future adverse events that could happen (e.g., parents undergoing a divorce). I tend to imagine bad things that haven't happened yet, like for example, if there's an event and I imagine what if it fails… when I was in elementary school, I experienced sexual harassment… maybe every time there's a guy, I might imagine… that incident.
(Senior High Student 2) 3.4 Growing Pains Some adolescents experience anxiety and stress related to school factors such as exams and academic pressure, as well as from outside activities such as sports club competitions. They felt ‘nervous’ before an exam and were ‘afraid of failure’. Like nervousness before exams, before performing, it's panic, fear… It's somewhat alleviated, but still a bit nervous but not too much… just regular anxiety. (Junior High Student 6) Adolescents also struggle with self‐esteem and body image issues. Some felt ‘shy’, ‘weak’, or labelled themselves as ‘nerds’ due to their appearance, whereas others struggled with weight concerns. These negative self‐perceptions were influenced by physical characteristics and past traumatic experiences, which had a lasting influence on self‐image. Maybe because I was a nerd, yeah, maybe because I wore glasses, they thought I was a nerd, weak, you know. Usually, others see me as weak. (Junior High Student 9) Moreover, adolescents struggle with a conflict between their desire for independence and self‐exploration and the limitations imposed by their parents. They believe it is better to ‘rely on themselves’ and not excessively on their parents. Furthermore, some adolescents have been encouraged from early life to be independent—learning, studying and reading alone. They [parents] just don't let me. Oh, yeah, they don't let me play carelessly. Going out randomly…my life is very controlled. (Junior High Student 1) 3.5 Aspirations, Motivations and Desires in Adolescence Adolescents aspire to a fulfilling career and meaningful life, with goals ranging from talent‐based careers (e.g., famous football player) to business careers for financial security (e.g., CEO of a company). However, these aspirations often differed from parental expectations. Some adolescents reluctantly gave up their dreams to meet family expectations, whereas others attempted to balance their aspirations with their parent's wishes. A few changed their minds, aligning more closely with parental views. Furthermore, some adolescents remained uncertain about their future goals and career paths. Beyond career aspirations, adolescents also valued personal fulfilment. They expressed desires for meaningful relationships, including close ‘friendships’ characterised by mutual support, trust and understanding. They also sought a sense of belonging and comfort in their social environments. Oh, my mom wants the best, I have to be the best for my mom, for my dad, so I'm pressuring myself, I have to be the best, I can't just be mediocre, I can't just be like that, you know. (Junior High Student 8) 3.6 Navigating Challenges: Diverse Adolescent Responses This theme presented adolescents' two distinctly different ways of coping with the mental health challenges they faced using two subthemes: into the shadows, which covers adolescents' maladaptive coping strategies, and balancing act, which covers adolescents' coping strategies. 3.7 Into the Shadows: Adolescents Maladaptive Coping Strategies Some adolescents had problems coping with life's stressors and they adopted maladaptive coping techniques. They avoided conflicts with friends and family members by ‘distancing’ themselves from their loved ones, staying ‘silent’ or ‘crying alone’. These adolescents were aware of risky behaviours, such as smoking, drinking and drug use, recognising their harmful nature. Just accepted it [the uncomfortable situation]. Lazy to make trouble. Just don't like having problems…just distance myself. 
(Junior High Student 4) A small number of adolescents experienced severe emotional distress, manifesting in various concerning behaviours. A few reported having suicidal thoughts, whereas others engaged in self‐harm using sharp tools, although without intent to die. These individuals often continued the self‐harming behaviour, experiencing an emotional release during the act, particularly when seeing blood. This troubling pattern provided a misguided sense of comfort or relief from their distress. The behaviour extended to social media interactions, with adolescents posting about their self‐harm and receiving responses from friends. In one extreme case, an individual engaged in reckless behaviour, walking carelessly on busy roads and risking her life. … I just needed peace. I just cry alone… I even used to self‐harm… When I felt upset… I self‐harmed again… As for me, when I post SW (status WhatsApp) but I like it… I often do that… help me to get a friend's attention. (Junior High Student 7) 3.8 Balancing Act: Adolescents' Coping Strategies Some adolescents employed healthy coping mechanisms to manage stress and maintain emotional balance. They engaged in various self‐chosen leisure activities, such as ‘reading’, ‘sports’, ‘listening to music’, ‘games’ and ‘drawing’. These activities served multiple purposes by providing a psychological distraction from stressors, bringing joy and promoting calmness. By engaging in these enjoyable pursuits, adolescents found stress relief, experienced positive emotions and gained control over their leisure time. Furthermore, some incorporated spiritual practices into their routines as coping mechanisms, such as ‘praying’ and ‘reciting’ the Quran. Yes, it helps… especially prayer, there was a time when I was crying and everything, it was really calming… (Senior High Student 1) Social support emerged as crucial, with many adolescents relying on friends and family for emotional support and companionship. Adolescents turned to their friends or family to share their stress and concerns. These supportive individuals played a vital role by listening attentively, offering understanding and providing encouragement. Friends and family members ‘cheered’ and ‘encouraged’ them, boosting their morale and helping them face their difficulties with renewed strength. They [close friends] cheer me up, we encourage each other. What is it, words like don't give up on exams, don't be… I don't know, just stay motivated. (Junior High Student 5) 3.9 Breaking the Silence: Empowering Adolescents Through Comprehensive Mental Health Education and Support The theme highlighted the need to empower adolescents with comprehensive mental health education and support and was supported by three subthemes: fostering mental health literacy, building bridges between parents and adolescents, creating a culture of kindness, and fostering kindness via peer support. 3.10 Fostering Mental Health Literacy Adolescents showed limited mental health literacy but expressed eagerness to learn more. They believed that their parents and teachers also needed ‘awareness’ of mental health, as mental health issues were often misunderstood or dismissed. To improve school mental health programmes, adolescents suggested incorporating engaging activities to catch their attention and create a safe environment. They emphasised using storytelling, ‘games’, and ‘icebreakers’ to promote understanding and open communication about mental health. 
Given the potentially sensitive nature of mental health, these interactive approaches would help make the topic approachable and memorable, allowing adolescents to engage with the material in a relaxed and comfortable setting. Just keep it relaxed, with icebreakers. Icebreakers are good… explain about mental health. (Senior High Student 4) Adolescents preferred programmes led by experienced mental health professionals with credible expertise and experience in mental health. They favoured ‘face‐to‐face’ interactions as they found in‐person interactions more ‘comfortable’ and ‘honest’. They also desired individual sessions for personal issues and large peer‐group activities to encourage engagement and shared understanding for general mental health literacy education. Because if it's in‐person, it's more comfortable, like if you want to cry, there's someone to hug or comfort you, so it's nicer… can share personal struggles. (Junior High Student 2) Actually, it's better if it's directly from the specialised healthcare workers. Because they are definitely trained and educated. sharing things in a group setting will help us to be less shy to understand mental health openly. (Senior High Student 2) 3.11 Building Bridges Between Parents and Adolescents Adolescents craved a deep connection with their parents. They longed to express their emotions openly without fear of being ‘scolded’, facing ‘anger’, or enduring ‘lectures’. Many adolescents wanted emotional support from their parents during challenging times, desiring understanding rather than criticism. Some adolescents hesitated to ‘joke’ or be light‐hearted around their parents due to unpredictable tempers or a lack of ‘quality time’. It's like we rarely meet and talk. Parents, I just want to hang out [with them] but they rarely have time. (Junior High School 4) Adolescents expressed a desire for their parents to be involved in mental health programmes, seeing these as ways to improve family connections. They suggested programmes that could teach parents about adolescent mental health, helping parents to ‘understand’ them well. These adolescents also recognised their need for education on communicating effectively with their parents. They believed parents could offer meaningful ‘support’ and ‘encouragement’ with both sides being well‐informed. However, some adolescents worried that increased parental involvement might initially cause ‘stress’ or discomfort, especially in families where communication was already challenging. …it's better to be open to our parents… so that our mental health doesn't deteriorate too much or something like that, let the teacher handle it along with the parents to help us better… (Junior High Student 7) …but with parents involved. Just so my parents understand my condition because I never talk about it, even when I'm sick I never say anything. (Senior High School 1) 3.12 Creating a Culture of Kindness and Fostering Kindness via Peer Support Adolescents wanted schools to become positive ‘environments’ where differences were embraced and mutual respect was the norm. To achieve this, they suggested improved school programmes to address bullying directly, ‘guidelines for witnesses’ and ‘consequences’ for those who bully others and strict enforcement of these anti‐bullying measures. Adolescents want to learn to be empathetic and caring towards their peers. 
To achieve this, they suggested improved school programmes to address bullying directly, ‘guidelines for witnesses’ and ‘consequences’ for those who bully others and strict enforcement of these anti‐bullying measures. Adolescents want to learn to be empathetic and caring towards their peers. They emphasised the need for education to promote understanding and compassion rather than ‘jealousy’ or ‘superiority’ and strengthen communication skills and supportive friendships. They wanted to embrace and respect differences, viewing this as a key approach to prevent bullying and create an inclusive environment. We should respect each other and know that there are limits to joking around and other things, not everything can be joked about. There's a limit to when we can joke or laugh at someone [with mental health issues]. (Junior High Student 5) Discussion This study aimed to understand Indonesian adolescents' perceptions of mental health challenges and needs. These findings showed that adolescents experienced symptoms of psychological distress, such as stress, anger outbursts and rumination tendencies. Some managed to cope with these stressors successfully by engaging in leisure activities and spiritual practices and obtaining support from friends and family. The distribution of male participants, junior high students, Sundanese ethnicity, and Islamic affiliation amongst participants closely mirrored the demographics of the study setting (West Java Province) (BPS‐Statistics of West Java Province ; Directorate General of Early Childhood Education ). Adolescents in this study experienced varied stressors and mental health concerns, from having low self‐esteem to being bullied and stressed to having suicidal thoughts. Previous research on Indonesian adolescents has similarly reported significant mental health concerns. Studies have found that 29.1% of adolescents have experienced depression, whereas the overall 12‐month prevalence of psychological distress is 7.3% amongst this population (Idris and Tuzzahra ; Marthoenis and Schouler‐Ocak ). Previous local studies reported that most Indonesian adolescents use adaptive to maladaptive coping mechanisms to cope with their mental health challenges (Batubara et al. ; Fine et al. ). To cope with stress using adaptive techniques, adolescents talk to trusted peers, avoid triggers and engage in healthy activities, such as exercise, relaxation, hobbies and prayer (Kaligis et al. ). Prayer in Islam has been found to help reduce sadness and boost life satisfaction (Aziz, Salahuddin, and Muntafi ). However, this study found that when adolescents cannot cope with life's stressors, they engage in maladaptive coping strategies, such as withdrawal, self‐harm, and suicide ideation and attempts, which Marthoenis and Schouler‐Ocak similarly reported. These behaviours may arise from prolonged social withdrawal, coupled with psychological pain and hopelessness that outweigh feelings of social connectedness. Contributing vulnerability factors include experiences of defeat or humiliation (Kirshenbaum et al. ; Zhu, Lee, and Wong ). Local studies have found that 23.1% of adolescents have engaged in self‐harm (Juliansen et al. ). In contrast, nearly 10% have experienced suicidal ideations or have developed a suicide plan (Marthoenis and Schouler‐Ocak ). The current study indicated that adolescents' inability to cope can be due to their lack of knowledge of mental health issues. 
The problem of lacking mental health literacy, which can lead to difficulties in coping, has been reported in previous studies conducted amongst Indonesian adolescents (Brooks, Prawira, et al. ; Juliansen et al. ). Mental health literacy programmes have been suggested to improve adolescents' mental health literacy (Juliansen et al. ). Given that this study showed that adolescents were willing to learn about mental health, organising mental health literacy programmes needs to be planned for Indonesian adolescents crucially. Previous studies have not thoroughly explored the specific needs of Indonesian adolescents regarding mental health literacy programmes (Brooks, Windfuhr, et al. ; Willenberg et al. ). This current study reported how adolescents would like mental health literacy programmes such as school‐based, face‐to‐face programmes delivered by local mental health experts, and a mixture of one‐on‐one or group sessions, depending on the topic covered. As such, local mental health experts can enhance adolescent education through culturally competent, specialised knowledge and in‐person interactions that offer stronger connections, empathetic care, and immediate support, which adolescents often prefer to online alternatives. Schools provide convenient, feasible venues for mental health literacy programmes, offering universal access and support to all adolescents regardless of risk status (Ishikawa et al. ). Therefore, future studies could involve school personnel such as teachers and healthcare providers in planning and implementing face‐to‐face mental health literacy programmes in Indonesian schools, leveraging local expertise to deliver effective interventions. This study found that adolescents experienced critical issues during their adolescent phase, including academic and peer pressure, body image concerns, romantic relationships, and a struggle for autonomy. These challenges are unique to adolescents as they mature and grow, coinciding with significant cognitive and emotional development (National Academies of Sciences et al. ). Underlying these experiences, adolescent brain development, particularly in regions such as the amygdala and cortex, is linked to the refinement of emotion regulation strategies such as cognitive reappraisal and expressive suppression, contributing to improved emotional control and cognitive abilities, ultimately resulting in the ability to achieve complex, integrated thoughts and actions (Ferschmann et al. ; Fombouchet et al. ). Given these developmental factors, mental health programmes need to address these specific issues that adolescents face to improve their emotional well‐being. This study also reported that, despite these stressors, adolescents aspire to achieve a meaningful life and career. Previous research has shown that such aspirations are common during this adolescent phase (Yamasaki et al. ). Having positive aspirations is beneficial because it can help them increase their chances of achieving success and leading meaningful and fulfilling lives (Napolitano et al. ). Mental health programmes can also incorporate career coaching to support these aspirations. Engaging career coaches to advise adolescents on post‐high school education and career choices can provide valuable guidance (Napolitano et al. ). This approach can help adolescents understand how to achieve their goals and navigate their future paths. Adolescents also reported having problems with bullying. This pervasive issue is common amongst adolescents in Indonesia and worldwide. 
(Biswas et al. ). The study indicated that adolescents also want to learn how to forge meaningful and supportive friendships because they feel that friends can provide valuable support. Adolescents value meaningful and close friendships. (Costello et al. ). Peer support enhances psychological well‐being by fostering connection and mutual understanding, promoting adjustment and stress management through emotional validation, and boosting motivation and self‐esteem via supportive relationships. (Costello et al. ). Hence, future mental health programmes can address bullying and promote healthy peer relationships. As suggested in this study, these programmes could be delivered with some group‐based elements to promote peer support amongst adolescents. Previous research has shown that group‐based mental health programmes enhance social and emotional well‐being, facilitating peer support and shared learning experiences, cultivating a sense of community, and promoting positive outcomes (Fassnacht et al. ). Thus, group‐based aspects can be added to adolescent mental health programmes, including forming peer support groups at school. Adolescents in this study wanted to build a strong connection with their parents as they believed their family could provide them invaluable support. Adolescents want good relationships with their parents (Branje ). Parents shape adolescents' development by adapting to their children's evolving needs (Branje ). Parents balance support and independence, guiding adolescents through conflicts and decisions (Branje ). This nuanced approach nurtures the social, emotional, and cognitive growth essential for the transition to adulthood (Branje ). Hence, adolescents suggested having their parents be involved in their mental health programmes to learn about the topic and understand them well. Previous studies have shown that parental involvement in mental health programmes can facilitate parent–child bonding and help parents learn how to support their children effectively without any stigma attached during their adolescence period (Ford et al. ; Lechuga‐Peña et al. ). Therefore, mental health programmes can incorporate parental involvement by implementing joint sessions for parents and children. Strengths and Limitations This study provided valuable insights into adolescents' experiences and perceptions of mental health challenges and needs and highlighted key points on how to improve the mental health of adolescents in Indonesia. Current findings are essential in guiding future research on developing comprehensive mental health educational and supportive programmes that address mental health challenges for adolescents in Indonesia. However, this research has certain limitations. As this study only recruited adolescents from city‐based public schools, current findings might not be representative of all adolescents in Indonesia. Moreover, collecting data only from adolescents may not provide a holistic view of the mental health challenges and needs of adolescents and future research should consider collecting and triangulating data from their parents and school educators as well. Conclusion This study explored adolescents' mental health challenges and needs. The current findings indicate that adolescents face particular challenges related to emotional regulation, body image and self‐esteem, academic pressure, and the influence of social media. In response to these adversities, adolescents employed a range of coping mechanisms, both adaptive and maladaptive. 
Additionally, they expressed a strong interest in developing strategies to manage these challenges and achieve their future career goals. The findings highlight that while adolescents face psychosocial distress, they showed interest in mental health education, favouring school‐based interventions by local experts. Findings emphasised the need for comprehensive school programs to improve mental health literacy, teach coping skills, address body image, promote autonomy, offer career guidance, and foster healthy relationships. Although parents, teachers, and healthcare professionals were not participants in this study, adolescents frequently cited them as important sources of support. Hence, the development of future mental health programmes would require collaborative efforts to be undertaken by parents, teachers, healthcare professionals, and mental health experts. Such programmes could enhance adolescents' mental health and well‐being, creating supportive environments at school and home. Relevance for Clinical Practice Findings suggest a need for comprehensive school‐based mental health literacy programs in Indonesia involving local experts, school staff, and health providers. These should address academic and peer pressure, body image, relationships, and autonomy struggles. Incorporating career coaching can help adolescents navigate their future paths. Furthermore, these initiatives should address bullying and promote healthy peer relationships through peer support groups. It is recommended that adolescents involve their parents in joint sessions to enhance understanding and engagement. Mental health programmes could significantly improve adolescents' well‐being by addressing identified challenges and capitalising on their interests. Future studies should include diverse adolescents from urban and rural areas attending private and public schools to ensure more representative findings and inform the development of effective, culturally appropriate interventions. Future research could explore digital mental health education options for Indonesian adolescents, asking them specifically what would be helpful and how to tailor these programmes to their evolving needs. Made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; D.I.Y., J.Y.X.C., S.S. Involved in drafting the manuscript or revising it critically for important intellectual content; D.I.Y., J.Y.X.C., S.S. Given final approval of the version to be published. Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content; D.I.Y., J.Y.X.C., J.C.M.W., M.P., Y.S.S.G., S.S. Agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. D.I.Y., J.Y.X.C., J.C.M.W., M.P., Y.S.S.G., S.S. The authors declare no conflicts of interest. Appendix S1. Table S1. |
Primary Immunodeficiencies in Russia: Data From the National Registry | 0b7782f2-2b27-4353-90c5-3469ee3fd3e8 | 7424007 | Pathology[mh] | Primary immunodeficiencies (PID)—also referred to as “inborn errors of immunity”—are rare disorders characterized by susceptibility to infection and a preponderance of autoimmunity, allergy, autoinflammation, and malignancies. According to the latest update of the International Union of Immunological Societies Experts Committee (IUIS) classification, germline mutations in 430 genes cause 404 distinct phenotypes of immunological diseases, divided into 10 groups according to the type of immunological defect. Wide introduction of the molecular genetic techniques, including next-generation sequencing (NGS) , has led to the description of novel PID genes. This allows for a more precise assessment of clinical prognosis and for the choice of targeted therapy—or even gene therapy—as well as for family counseling . Generally, PID are described as rare diseases. Yet their reported prevalence varies greatly in different countries, depending on many factors: from data collection methodology to objective epidemiological features. In European countries, the estimated prevalence of PID ranges from 2.7/100,000 in Germany, to 4.16-5.9/100,000 in Switzerland and the United Kingdom (UK), to 8/100,000 in France . These numbers are in the range of the "orphan diseases" category. Yet recent findings, in patients with mendelian susceptibility to mycobacterial diseases (MSMD) , suggest that the actual prevalence is much higher. National PID registries , along with registries combining data for geographical regions , have proven to be an important tool for assessing the clinical and epidemiological features of PID—as well as an instrument for facilitating PID collaboration and research, both within and between countries. Several PID cohort study reports from Russia have been published recently, yet little has been known about the overall epidemiological features of PID in the heterogeneous Russian population. The aim of this study is to describe PID epidemiology in Russia, using a national registry. Registry Structure The Russian PID registry was established in 2017, as an initiative of the National Association of Experts in PID (NAEPID)—a non-profit organization facilitating collaboration amongst leading specialists in the field of primary immunodeficiencies in Russia. The registry is a secure on-line database, developed, and designed with the aim of collecting epidemiological, clinical, and genetic data of PID in Russia. It includes demographic data, clinical and laboratory details, molecular diagnosis, and treatment aspects of PID patients of all ages. Regular information updates allow for the collection of prospective data. The data is entered via an online registry form only; no paper-based documentation is needed. A group of trained managers at federal centers and doctors at regional hospitals enter the data in the database. This article analyzes the data input into the registry from its inception until February 1, 2020. At the time of the data analysis, PID variants were grouped according to the IUIS 2015–2017 classification and did not include the newly added category of bone marrow failure . The database structure includes the following obligatory fields: demographic data, family history, diagnosis, genetic testing results, and ages of disease onset and diagnosis. 
The extended universal fields—including detailed clinical description and treatment data—are not mandatory at the time of the first registry of a patient, but are eventually requested. New entries are reviewed automatically, and no duplicate entries can be created. Human-factor errors are prevented by built-in quality assurance measures. Patients can only be registered if the documenting center is part of the registry's collaborative team. Written informed consent is given by all registered patients or their legal guardians. Regularly updated reports on PID epidemiological data are published on the NAEPID Registry website http://naepid-reg.ru . Registry Platform The software platform used in the study was developed by Rosmed.info, using the PHP programming language. For database management, the Maria DB relational system (offshoot of the MySQL system) was utilized. Server Version: the 10.1.40-Maria DB Server and replication mechanism were used for back-up and improved performance; the server's contour and physical protection were compliant with Russian law regarding personal information protection. Centers Russia is divided into 85 regions, which are grouped geographically into eight federal districts. Data on the PID patients residing in 83 of the federal regions has been accumulated in the registry, with the input of regional and tertiary centers. No patients residing in the other two regions (Chukotka and Tuva) were registered in the database. At the time of analysis, 69 regional medical centers and 5 university clinics—located in all 8 federal districts—have contributed to the collaborative work. Three tertiary immunology centers located in Moscow serve as the main reference centers. The diagnosis of the majority of the patients (2,488/2,728, 91%) has been confirmed in at least one of the tertiary centers. Patients PID diagnosis was made according to the ESID diagnostic criteria . Patients with secondary immune defects were excluded. Although the registry collects data on all PID, 233 patients with selective IgA deficiency, and 106 patients with PFAPA (periodic fever, aphthous stomatitis, pharyngitis, adenitis) were not included in the current analysis. The entire cohort of patients (2,728) was included in the epidemiological analysis—while, for the treatment description, we used only the updated information available for the 1,851 alive patients. Genetic testing has been performed using the main molecular techniques, including Sanger sequencing, targeted next-generation sequencing (NGS), whole-exome and whole-genome sequencing, fluorescent in situ hybridization (FISH), multiplex ligation-dependent probe amplification (MLPA), and chromosomal microarray analysis (CMA), according to standard protocols. Data Verification All data entered into the registry undergoes automatic verification for typing errors and is regularly checked by the database monitor for consistency and completeness. Terminology and Definitions The actual age distribution was calculated only for the patients with updated information; the age of each patient was determined as the difference between their date of birth and the date of the last update. Patients without any contact within the last 2 years were marked as “lost to follow-up.” The diagnostic delay was estimated for all registered patients, in the nine most common PID categories, as the difference between the date of disease onset and the date of clinical diagnosis of PID. 
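To make the interval definitions above concrete, the following is a minimal illustrative sketch (in Python) of how age at last update, lost-to-follow-up status, and diagnostic delay could be computed from dates. It is not the registry's actual code: the function and field names (patient_summary, date_of_birth, disease_onset, clinical_diagnosis, last_update) are assumptions chosen only for the example, and ISO-formatted date strings are assumed.

from datetime import date

FOLLOW_UP_WINDOW_DAYS = 2 * 365  # "no contact within the last 2 years"

def years_between(start: date, end: date) -> float:
    """Approximate interval between two dates, in years."""
    return (end - start).days / 365.25

def patient_summary(date_of_birth: str, disease_onset: str,
                    clinical_diagnosis: str, last_update: str,
                    reference_date: str) -> dict:
    """Derive the per-patient quantities described in Terminology and Definitions."""
    dob = date.fromisoformat(date_of_birth)
    onset = date.fromisoformat(disease_onset)
    diagnosis = date.fromisoformat(clinical_diagnosis)
    update = date.fromisoformat(last_update)
    cutoff = date.fromisoformat(reference_date)
    return {
        "age_at_last_update_years": years_between(dob, update),
        "diagnostic_delay_years": years_between(onset, diagnosis),
        "lost_to_follow_up": (cutoff - update).days > FOLLOW_UP_WINDOW_DAYS,
    }

# Hypothetical example: onset at 6 months of age, diagnosed about 2 years later,
# last updated within 2 years of the analysis date (so not lost to follow-up).
print(patient_summary("2015-03-01", "2015-09-01", "2017-10-15", "2019-12-01", "2020-02-01"))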
Prevalence was estimated as the number of all registered PID cases, divided by the population of Russia or of each federal district; information was obtained from open resources . Incidence was estimated as the number of new PID cases diagnosed during each year, divided by the number of live births during that year in Russia; information was obtained from open resources. Prevalence and incidence were expressed as the number of cases per 100,000 people.
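As a worked illustration of the rate definitions above, the short Python sketch below expresses a case count per 100,000 of the chosen denominator (total population for prevalence, live births of a given year for incidence). The numbers in the usage example are placeholders for illustration only, not registry data.

def per_100_000(cases: int, denominator: int) -> float:
    """Rate per 100,000 (prevalence: population; incidence: live births of that year)."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return cases / denominator * 100_000

# Placeholder numbers, illustration only:
prevalence = per_100_000(cases=2_000, denominator=146_000_000)  # about 1.4 per 100,000 people
incidence = per_100_000(cases=90, denominator=1_600_000)        # about 5.6 per 100,000 live births
print(round(prevalence, 1), round(incidence, 1))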
Mortality rate, expressed in percentage, was estimated as the number of deceased patients divided by the number of all updated PID cases; lost-to-follow-up patients were excluded. The category of “fully recovered” was not available at the time of analysis. Patients from birth to 17 years, 11 months, and 29 days were counted as children. The rest were considered adults. Demographic and epidemiological characteristics were described as average for the categorical variables, and median and range for the quantitative variables. To compare the prevalence of the diseases, the chi-squared test was used and a p -value of <0.05 was considered statistically significant. The average immunoglobulin (IG) dose was expressed as mean ± standard deviation. Statistical analysis was performed using XLSTAT Software (Addinsoft). Demographics and PID Distribution Information on 2,728 PID patients was available for analysis. Of these patients, 1,851 (68%) were marked as alive and 200 (7%) as dead. The remaining 677 (25%) were not updated during the last year or were lost to follow-up. The male-to-female ratio was 1.5:1, with 1,657 male patients (60%) and 1,071 female (40%). Of the 1,851 living patients, 1,426 (77%) were children, and 425 (23%) were adults. The majority of the children (913 of 1,426, 64%) were under 10 years old. The male-to-female ratio varied from 2:1 in children, to 1:1 in the group of adults under the age of 30 and 0.4:1 in the older patients . PID was diagnosed before the age of 18 years (in childhood) in 2,192 patients (88%), predominantly in the first 5 years of life (1,356, 54%; ). The distribution of patients among the main PID groups varied greatly between children and adults. All forms of PID were observed in children and in young adults (under the age of 25 years). Yet the majority of older patients belonged to just two categories—common variable immunodeficiency (CVID) and hereditary angioedema (HAE). Overall, primary antibody deficiencies (PAD; 699; 26%) and syndromic PID (591; 22%) were the most common disorders in Russia. These were followed by five PID groups, in similar proportions: complement deficiencies (342; 12%), phagocytic defects (262; 10%), combined T and B cell defects (368; 13%), autoinflammatory disorders (221; 8%), and immune dysregulation (196; 7%; ). Somatic phenocopies (6; <1%) and defects of innate immunity (43; 1.5%) were very rare. The most frequent PID categories in Russia, which cumulatively accounted for 53% of all registered patients, were: HAE type 1 and 2 ( n = 341), CVID ( n = 317), Wiskott–Aldrich syndrome (WAS; n = 154), X-linked agammaglobulinemia (XLA; n = 155), Chronic granulomatous disease (CGD; n = 135; of them 92 patients with X-linked CGD (X-CGD), Severe combined immunodeficiency (SCID; n = 137; of them 47 patients with X-linked SCID (X-SCID), DiGeorge syndrome (DGS; n = 130), Ataxia-telangiectasia (AT; n = 127) and Nijmegen breakage syndrome (NBS; n = 88; ). To assess mortality, we analyzed the cohort of 2,051 patients whose status was known (including 1,851 alive and 200 deceased patients). The overall mortality rate was estimated at 9.7%. The precise date of death was known for 136 of the 200 deceased patients: 127 (93%) children and 9 (7%) adults . The mortality rate ranged from 2 to 42% in different age groups; the highest rate was found in children in their first 2 years of life . The majority of infant deaths occurred in SCID patients (39 of 48, 81%; ). 
In the next age group (2–5 years), mortality was highest in the following four PID groups, in almost equal proportions: T and B cell defects (12/38, 32%) and syndromic PID (11/38, 29%), followed by phagocytic defects (7/38, 18%), and immune dysregulation (7/38, 18%). In total, 63% (86/136) of all PID-related deaths occurred in patients within the first 5 years of life. In older children , mortality was associated predominantly with syndromic PID (55%), immune dysregulation (9%), and PAD (13%)—whereas, in adults, it was associated only with PAD (78%) and HAE (22%; ). Diagnostic Delay Substantial PID diagnostic delay has been noted in Russia—with a median of 2 years for the whole group, but over a broad age range (0–68 years). No difference in diagnostic delay was observed, between patients diagnosed during the last 5 years ( M = 2; 0–63, 997 patients) and before 2015 ( M = 2 years; 0–68, 1,400 patients). Among the most common PID, the shortest diagnostic delay was observed in SCID ( M = 4 months, 0–68), followed by the WAS ( M = 8 months, 0–144), DGS ( M = 10 months, 0–144), and CGD ( M = 1, 0–17 years; ). In X-linked agammaglobulinemia (XLA) patients, time to diagnosis varied greatly—from 0 to 141 months, with a median of 28 months. The DNA repair disorders NBS and AT were diagnosed with a median of 2.5 years (0–23) and 3.0 years (0–14), respectively . The longest diagnostic delay was observed in CVID ( M = 6 years, 0–52) and HAE ( M = 11 years, 0–68; ). Just a few PID patients were diagnosed before the clinical onset of the disease, due to their family history; genetic testing was carried out for each of them. These included seven children with mutations in SERPING1 , two with BTK , one with WAS , and one with JAK3 defects. Genetic diagnosis led to an early start on IVIG therapy in the XLA patients, and to successful HSCT in the WAS and SCID patients. Family History The registry contained 310/2,728 (11%) familial PID cases, originating from 150 families , with the most frequent familial PIDs being HAE, WAS, and XLA. Consanguinity, as reported by the parents, was documented in 45 families. A family history of at least one death suspected to be due to PID was documented for 275 patients. These included infection-related deaths, in 185 cases, and malignancy-related deaths in 49 cases. Epidemiology The minimum overall PID prevalence in the Russian population was estimated at 1.3:100,000 people, with drastic variations among the federal districts (from 0.9 to 2.8 per 100,000; ). The average annual PID incidence was estimated to be 5.7 ± 0.6 in 100,000 live births. This ranged from 4.4 to 7.1:100,000, over the period from 2000 to 2019. During this period, the average number of newly diagnosed PID cases per year increased from 201 to 331 . Prevalence was estimated only for those PIDs frequently found in the adult group and with a low number of deaths registered in the database—CVID and HAE, with 0.22 and 0.23 per 100,000 people, respectively. This represents population frequency rates of 1 case per 430,000–450,000 people. Genetic Defects Genetic testing has been performed for 1,740 patients, with genetic defects confirmed in 1,344 (77%). PID diagnosis has been genetically confirmed in 86% of the children, yet in only 12% of the adults. 
Disease-causing genetic defects were detected by the following genetic methods: by direct Sanger sequencing in 903 patients (67%) and by next-generation sequencing (NGS) methods in 323 (24%) patients [including targeted panels, in 278; whole exome sequencing (WES), in 30; Clinical exome, in 13; and whole genome sequencing (WGS), in 2]. In the remaining 118 (9%) patients, cytogenetic methods and MLPA were used. Deletion of 22q.11 was confirmed via the FISH method in 80 patients, and by CMA in 26. In 6 cases, various chromosomal abnormalities resulting in syndromic forms of PID were confirmed by CMA. Mutations were found in 98 PID genes and in three genes that are not currently included in the PID classification ( NTRK1, SCN9A, XRCC4 ) . As expected, the highest number of genetic defects were found in genes underlying the most frequent “classical” PID: mutations in SERPING1 were found in 178 of 341 HAE cases (52.2%), WAS in 154 (100%) of WAS patients , BTK in 114 of 155 X-LA (73.5%) , CYBB in 98 (73%) of CGD 135 cases , NBN in 75/88 (85%) of NBS patients and ATM in 55/127 (43%) of AT patients. 106/130 DGS patients had del22q.11 confirmed. At least 20 patients (for each disease) had mutations in the following genes: MEFV, MVK, NLRP3, ELANE, SBDS, FAS, STAT3 LOF, IL2RG , and CD40LG . Rare defects, with 4–20 patients for each gene, affected predominantly recently described genes: PSTPIP1, TNFRSF1A, CXCR4, STAT1, CYBA, STXBP2, FOXP3, CTLA4, AIRE, XIAP, SH2D1A, SMARCAL1, RMRP, SPINK5, KMT2D, NFKB1, PIK3CD, PIK3R1, TNFRSF13B, RAG1, RAG2, ADA, ARTEMIS, JAK3, LIG4 , and KRAS . The remaining 57 genes had mutations recorded for single patients . The proportion of patients with genetically confirmed diagnoses was highest among those with syndromic PIDs, reaching 77% (457/591) . Within the phagocytic defect and innate immunity defect groups, 71% (185/262) and 63% (27/43) of the patients, respectively, had a genetic diagnosis. PID genetic confirmation showed about half of all patients in the groups to have immune dysregulation (56%; 109/196), autoinflammatory disorders (49%; 109/221), and complement deficiencies (52%; 179/342)—the last of these due mainly to HAE. The proportion of patients with genetic diagnoses showing T- and B-cell defects was 33% (123/368). The lowest number of patients with verified mutations, at 21% (144/699), was observed in the PAD group ; BTK abnormalities prevailed among them (114/155; 73.5%). Somatic mutations in KRAS and NRAS were confirmed in six patients. The segregation of genetic defects by mode of inheritance was nearly equal: 469 patients (38.4%) with an X-linked (XL) diseases had mutations in 10 genes, 383 (31.4%) patients with autosomal dominant (AD) diseases had mutations in 29 genes, and 369 (30.2%) patients with autosomal recessive (AR) diseases had mutations in 58 genes. In the group of AR PID patients, 218 (59%) had compound heterozygous mutations and 151 (41%) had homozygous mutations; the majority (74; 49%), as expected, were NBS patients with the “Slavic” mutation in the NBN gene . Homozygous mutations were also found in the genes with the known “hot-spots”: MEFV (11; 7%) and AIRE (5; 3%). Another “Slavic” mutation— RAG1 c.256_257delAA p.K86fs, in a compound heterozygous or homozygous state—was reported in 7/16 patients with RAG1 defects, putting this allele frequency at 25%. Testing for prenatal PID diagnosis (PND) was performed in 40 pregnancies among 37 families with previously known PID-causing genetic defects. 
Embryonic/fetal material was obtained by chorionic villi sampling at 10–12 weeks of gestation in 37 cases; by amniocentesis in the second trimester, in two cases; and by cordocentesis, in one. No serious complications were noted, during or after the procedures. 30/40 embryos were mutation-free. In six cases, a PID diagnosis was given; all families chose to terminate the pregnancies. Four embryos were heterozygous carriers of recessive PID mutations—all these pregnancies were carried to term. Two more sibling heterozygous carriers were born after preimplantation diagnosis. Symptomatic Treatment Treatment of PID symptoms, as documented in the registry, has been divided into three categories: immunoglobulin (IG) substitution, biologicals, and “other.” There was updated information for 1,622 patients, regarding prescribed or on-going therapy. Half of the patients (843/1,622, 52%) received IG substitution. Of these, only 32 patients (4%) have ever had an experience with subcutaneous IG (SCIG); all others received intravenous IG (IVIG), with an average dose of 0.46 ± 0.09 g/kg per month. Regular IG substitution therapy was recorded in 279/369 patients (76%) with syndromic PID, in 296/433 (68%) PAD patients and in 173/270 patients (64%) with combined PID. At least single (but not regular) IG use was recorded for 15/29 patients (52%) with defects of innate immunity, 61/124 patients (49%) with immune dysregulation, 49/172 patients (28%) with phagocytosis defects, and 25/171 patients (15%) with autoinflammatory disorders. 414/1,622 (25%) patients were treated with various biological drugs. Updated information was available for 91 HAE patients, of whom 70/91 (77%) received either a C1 inhibitor or a selective antagonist of bradykinin receptors during attacks, including 51 patients who had experience with both drugs. In other PIDs, the rate of biological treatment was highest in the group of patients with autoinflammatory disorders: 86/186 (46%). This was followed by the group of immune dysregulation, with 48/134 (36%); and of combined PID, with and without syndromic features: 63/405 (16%) and 27/242 (18%), respectively. Patients with disorders of innate immunity and PAD were treated with biologicals only, in 3/32 (9%) and in 43/453 (6%) cases, respectively. Curative Therapies Three patients in the cohort underwent gene therapy for WAS; all are currently alive. Information was available for 342/2,728 (16%) patients who underwent HSCT. Of these, 60 were deceased, 228 alive and 54 had not been updated during the prior 2 years . All transplanted patients were diagnosed with PID as children. Yet, in 5/342, HSCT was performed after 18 years of age. HSCT has been performed in 106/591 (18%) patients with PIDs with syndromic features (18% of all syndromic PIDs), including 92/106 (88%) with WAS and 25/88(28%) with NBS; in 111 patients with combined T- and B-cell defects (30% of all CID), including 79/137 SCID (58%); in 66/262(25%) patients with phagocytic defects, including 47/135 CGD (35%) and 14/107 SCN (13%); in 41/196 (21%) patients with immune dysregulation; in 5/699 (0.7%) patients with PAD [four with activated PI3K syndrome (APDS) and 1 with XLA]; in 6/221(3%) patients with autoinflammatory disorders; and in 7/43 (16%) patients with defects of innate immunity. Information on 2,728 PID patients was available for analysis. Of these patients, 1,851 (68%) were marked as alive and 200 (7%) as dead. The remaining 677 (25%) were not updated during the last year or were lost to follow-up. 
The male-to-female ratio was 1.5:1, with 1,657 male patients (60%) and 1,071 female (40%). Of the 1,851 living patients, 1,426 (77%) were children, and 425 (23%) were adults. The majority of the children (913 of 1,426, 64%) were under 10 years old. The male-to-female ratio varied from 2:1 in children, to 1:1 in the group of adults under the age of 30 and 0.4:1 in the older patients . PID was diagnosed before the age of 18 years (in childhood) in 2,192 patients (88%), predominantly in the first 5 years of life (1,356, 54%; ). The distribution of patients among the main PID groups varied greatly between children and adults. All forms of PID were observed in children and in young adults (under the age of 25 years). Yet the majority of older patients belonged to just two categories—common variable immunodeficiency (CVID) and hereditary angioedema (HAE). Overall, primary antibody deficiencies (PAD; 699; 26%) and syndromic PID (591; 22%) were the most common disorders in Russia. These were followed by five PID groups, in similar proportions: complement deficiencies (342; 12%), phagocytic defects (262; 10%), combined T and B cell defects (368; 13%), autoinflammatory disorders (221; 8%), and immune dysregulation (196; 7%; ). Somatic phenocopies (6; <1%) and defects of innate immunity (43; 1.5%) were very rare. The most frequent PID categories in Russia, which cumulatively accounted for 53% of all registered patients, were: HAE type 1 and 2 ( n = 341), CVID ( n = 317), Wiskott–Aldrich syndrome (WAS; n = 154), X-linked agammaglobulinemia (XLA; n = 155), Chronic granulomatous disease (CGD; n = 135; of them 92 patients with X-linked CGD (X-CGD), Severe combined immunodeficiency (SCID; n = 137; of them 47 patients with X-linked SCID (X-SCID), DiGeorge syndrome (DGS; n = 130), Ataxia-telangiectasia (AT; n = 127) and Nijmegen breakage syndrome (NBS; n = 88; ). To assess mortality, we analyzed the cohort of 2,051 patients whose status was known (including 1,851 alive and 200 deceased patients). The overall mortality rate was estimated at 9.7%. The precise date of death was known for 136 of the 200 deceased patients: 127 (93%) children and 9 (7%) adults . The mortality rate ranged from 2 to 42% in different age groups; the highest rate was found in children in their first 2 years of life . The majority of infant deaths occurred in SCID patients (39 of 48, 81%; ). In the next age group (2–5 years), mortality was highest in the following four PID groups, in almost equal proportions: T and B cell defects (12/38, 32%) and syndromic PID (11/38, 29%), followed by phagocytic defects (7/38, 18%), and immune dysregulation (7/38, 18%). In total, 63% (86/136) of all PID-related deaths occurred in patients within the first 5 years of life. In older children , mortality was associated predominantly with syndromic PID (55%), immune dysregulation (9%), and PAD (13%)—whereas, in adults, it was associated only with PAD (78%) and HAE (22%; ). Substantial PID diagnostic delay has been noted in Russia—with a median of 2 years for the whole group, but over a broad age range (0–68 years). No difference in diagnostic delay was observed, between patients diagnosed during the last 5 years ( M = 2; 0–63, 997 patients) and before 2015 ( M = 2 years; 0–68, 1,400 patients). Among the most common PID, the shortest diagnostic delay was observed in SCID ( M = 4 months, 0–68), followed by the WAS ( M = 8 months, 0–144), DGS ( M = 10 months, 0–144), and CGD ( M = 1, 0–17 years; ). 
In X-linked agammaglobulinemia (XLA) patients, time to diagnosis varied greatly—from 0 to 141 months, with a median of 28 months. The DNA repair disorders NBS and AT were diagnosed with a median of 2.5 years (0–23) and 3.0 years (0–14), respectively . The longest diagnostic delay was observed in CVID ( M = 6 years, 0–52) and HAE ( M = 11 years, 0–68; ). Just a few PID patients were diagnosed before the clinical onset of the disease, due to their family history; genetic testing was carried out for each of them. These included seven children with mutations in SERPING1 , two with BTK , one with WAS , and one with JAK3 defects. Genetic diagnosis led to an early start on IVIG therapy in the XLA patients, and to successful HSCT in the WAS and SCID patients. The registry contained 310/2,728 (11%) familial PID cases, originating from 150 families , with the most frequent familial PIDs being HAE, WAS, and XLA. Consanguinity, as reported by the parents, was documented in 45 families. A family history of at least one death suspected to be due to PID was documented for 275 patients. These included infection-related deaths, in 185 cases, and malignancy-related deaths in 49 cases. The minimum overall PID prevalence in the Russian population was estimated at 1.3:100,000 people, with drastic variations among the federal districts (from 0.9 to 2.8 per 100,000; ). The average annual PID incidence was estimated to be 5.7 ± 0.6 in 100,000 live births. This ranged from 4.4 to 7.1:100,000, over the period from 2000 to 2019. During this period, the average number of newly diagnosed PID cases per year increased from 201 to 331 . Prevalence was estimated only for those PIDs frequently found in the adult group and with a low number of deaths registered in the database—CVID and HAE, with 0.22 and 0.23 per 100,000 people, respectively. This represents population frequency rates of 1 case per 430,000–450,000 people. Genetic testing has been performed for 1,740 patients, with genetic defects confirmed in 1,344 (77%). PID diagnosis has been genetically confirmed in 86% of the children, yet in only 12% of the adults. Disease-causing genetic defects were detected by the following genetic methods: by direct Sanger sequencing in 903 patients (67%) and by next-generation sequencing (NGS) methods in 323 (24%) patients [including targeted panels, in 278; whole exome sequencing (WES), in 30; Clinical exome, in 13; and whole genome sequencing (WGS), in 2]. In the remaining 118 (9%) patients, cytogenetic methods and MLPA were used. Deletion of 22q.11 was confirmed via the FISH method in 80 patients, and by CMA in 26. In 6 cases, various chromosomal abnormalities resulting in syndromic forms of PID were confirmed by CMA. Mutations were found in 98 PID genes and in three genes that are not currently included in the PID classification ( NTRK1, SCN9A, XRCC4 ) . As expected, the highest number of genetic defects were found in genes underlying the most frequent “classical” PID: mutations in SERPING1 were found in 178 of 341 HAE cases (52.2%), WAS in 154 (100%) of WAS patients , BTK in 114 of 155 X-LA (73.5%) , CYBB in 98 (73%) of CGD 135 cases , NBN in 75/88 (85%) of NBS patients and ATM in 55/127 (43%) of AT patients. 106/130 DGS patients had del22q.11 confirmed. At least 20 patients (for each disease) had mutations in the following genes: MEFV, MVK, NLRP3, ELANE, SBDS, FAS, STAT3 LOF, IL2RG , and CD40LG . 
Rare defects, with 4–20 patients for each gene, affected predominantly recently described genes: PSTPIP1, TNFRSF1A, CXCR4, STAT1, CYBA, STXBP2, FOXP3, CTLA4, AIRE, XIAP, SH2D1A, SMARCAL1, RMRP, SPINK5, KMT2D, NFKB1, PIK3CD, PIK3R1, TNFRSF13B, RAG1, RAG2, ADA, ARTEMIS, JAK3, LIG4 , and KRAS . The remaining 57 genes had mutations recorded for single patients . The proportion of patients with genetically confirmed diagnoses was highest among those with syndromic PIDs, reaching 77% (457/591) . Within the phagocytic defect and innate immunity defect groups, 71% (185/262) and 63% (27/43) of the patients, respectively, had a genetic diagnosis. PID genetic confirmation showed about half of all patients in the groups to have immune dysregulation (56%; 109/196), autoinflammatory disorders (49%; 109/221), and complement deficiencies (52%; 179/342)—the last of these due mainly to HAE. The proportion of patients with genetic diagnoses showing T- and B-cell defects was 33% (123/368). The lowest number of patients with verified mutations, at 21% (144/699), was observed in the PAD group ; BTK abnormalities prevailed among them (114/155; 73.5%). Somatic mutations in KRAS and NRAS were confirmed in six patients. The segregation of genetic defects by mode of inheritance was nearly equal: 469 patients (38.4%) with an X-linked (XL) diseases had mutations in 10 genes, 383 (31.4%) patients with autosomal dominant (AD) diseases had mutations in 29 genes, and 369 (30.2%) patients with autosomal recessive (AR) diseases had mutations in 58 genes. In the group of AR PID patients, 218 (59%) had compound heterozygous mutations and 151 (41%) had homozygous mutations; the majority (74; 49%), as expected, were NBS patients with the “Slavic” mutation in the NBN gene . Homozygous mutations were also found in the genes with the known “hot-spots”: MEFV (11; 7%) and AIRE (5; 3%). Another “Slavic” mutation— RAG1 c.256_257delAA p.K86fs, in a compound heterozygous or homozygous state—was reported in 7/16 patients with RAG1 defects, putting this allele frequency at 25%. Testing for prenatal PID diagnosis (PND) was performed in 40 pregnancies among 37 families with previously known PID-causing genetic defects. Embryonic/fetal material was obtained by chorionic villi sampling at 10–12 weeks of gestation in 37 cases; by amniocentesis in the second trimester, in two cases; and by cordocentesis, in one. No serious complications were noted, during or after the procedures. 30/40 embryos were mutation-free. In six cases, a PID diagnosis was given; all families chose to terminate the pregnancies. Four embryos were heterozygous carriers of recessive PID mutations—all these pregnancies were carried to term. Two more sibling heterozygous carriers were born after preimplantation diagnosis. Treatment of PID symptoms, as documented in the registry, has been divided into three categories: immunoglobulin (IG) substitution, biologicals, and “other.” There was updated information for 1,622 patients, regarding prescribed or on-going therapy. Half of the patients (843/1,622, 52%) received IG substitution. Of these, only 32 patients (4%) have ever had an experience with subcutaneous IG (SCIG); all others received intravenous IG (IVIG), with an average dose of 0.46 ± 0.09 g/kg per month. Regular IG substitution therapy was recorded in 279/369 patients (76%) with syndromic PID, in 296/433 (68%) PAD patients and in 173/270 patients (64%) with combined PID. 
At least single (but not regular) IG use was recorded for 15/29 patients (52%) with defects of innate immunity, 61/124 patients (49%) with immune dysregulation, 49/172 patients (28%) with phagocytosis defects, and 25/171 patients (15%) with autoinflammatory disorders. 414/1,622 (25%) patients were treated with various biological drugs. Updated information was available for 91 HAE patients, of whom 70/91 (77%) received either a C1 inhibitor or a selective antagonist of bradykinin receptors during attacks, including 51 patients who had experience with both drugs. In other PIDs, the rate of biological treatment was highest in the group of patients with autoinflammatory disorders: 86/186 (46%). This was followed by the group of immune dysregulation, with 48/134 (36%); and of combined PID, with and without syndromic features: 63/405 (16%) and 27/242 (18%), respectively. Patients with disorders of innate immunity and PAD were treated with biologicals only, in 3/32 (9%) and in 43/453 (6%) cases, respectively. Three patients in the cohort underwent gene therapy for WAS; all are currently alive. Information was available for 342/2,728 (16%) patients who underwent HSCT. Of these, 60 were deceased, 228 alive and 54 had not been updated during the prior 2 years . All transplanted patients were diagnosed with PID as children. Yet, in 5/342, HSCT was performed after 18 years of age. HSCT has been performed in 106/591 (18%) patients with PIDs with syndromic features (18% of all syndromic PIDs), including 92/106 (88%) with WAS and 25/88(28%) with NBS; in 111 patients with combined T- and B-cell defects (30% of all CID), including 79/137 SCID (58%); in 66/262(25%) patients with phagocytic defects, including 47/135 CGD (35%) and 14/107 SCN (13%); in 41/196 (21%) patients with immune dysregulation; in 5/699 (0.7%) patients with PAD [four with activated PI3K syndrome (APDS) and 1 with XLA]; in 6/221(3%) patients with autoinflammatory disorders; and in 7/43 (16%) patients with defects of innate immunity. The current study represents the first attempt to systematically assess clinical and epidemiological data on patients with PID in Russia, using the online registry. At the time of analysis, 2,728 PID patients were registered, representing all districts of the country—thus making this study a valid assessment of the PID cohort in Russia. Since reporting patients in the registry was not mandatory for the treating physicians, we expect underreporting of about 15–20% and are therefore able to discuss only the minimum epidemiological characteristics of PID in Russia. Though PID prevalence in the central part of Russia (2.8 per 100,000 people) is comparable to that of most European countries (2–8 per 100,000 people) , the overall prevalence of 1.3 per 100,000 is quite low. This reflects significant under-diagnosis, especially in regions with low population density and economic status. The male-to-female ratio in our various age groups does not differ greatly from previous observations, with males predominating amongst children and females in adulthood . Our study demonstrates a high mortality rate in the Russian PID cohort—as high as 9.8%—as compared with the most recently published German and Swiss registry. Yet it is comparable to the 8.6% (641/7,430) in the previously published ESID registry and the 8% (2,232/27,550) provided by the online ESID reporting website . Significantly, half of reported PID deaths occur within the first 5 years of life. 
This stresses the importance of early PID diagnosis and quick referral to transplanting centers, as SCID and other CIDs account for the majority of early PID deaths. In light of these statistics, unrecognized infant PID mortality may significantly contribute to the low prevalence of PID in Russia, as patients die before they are diagnosed with PID. Thus, future introduction of neonatal PID screening utilizing TREC/KREC detection may substantially improve PID verification . Children represent the majority (77%) of PID patients in the registry. Comparing this data to other registries—where patients over 18 years old represent up to 55% of all PID —we can conclude that adults with PID are the most under-diagnosed category in Russia. This statement is confirmed by the low proportion of PAD defects in the Russian registry (21 vs. 56% in the ESID registry) . This, in turn, reflects low numbers of patients with CVID, the main PID affecting adults worldwide. The estimated prevalence of CVID in Russia is 0.2 per 100,000 people—whereas, in other registries, CVID prevalence reaches 1.3 per 100,000 people . In the recent years, Russia developed a relatively good network of pediatric immunologists, yet adult immunologists are scarce. NAEPID and the registry team have an educational and organizational plan aimed at improving adult PID diagnosis and care. The registry will be a good tool to assess success of the project in the next 5 years. Combined immunodeficiencies with syndromic features constitute the most prominent PID group in the registry (22%), presumably due to the well-defined phenotype and the high awareness of these disorders among various medical specialists. Patients with WAS and DGS have the shortest diagnostic delay and the highest proportion of genetic confirmation. Overall, the majority of genetic defects were confirmed in the clinically or analytically well-defined and well-described PID (HAE, WAS, XLA, CGD, and NBS). Most studies also have the highest genetic confirmation rate in the group of combined PID , though an Iranian study describes a predominance of genetic defects in the dysregulation group . The patients' distribution amongst PID groups differs from that of most published registries in other aspects, as well. Though PAD are underrepresented, we have relatively large groups of autoinflammatory disorders (AID) and complement defects (predominantly HAE). This is because the Russian PID database collects data on all IUIS classified PIDs, in contrast with some other countries—where AID cases are followed and reported predominantly by rheumatologists, and HAE cases predominantly by allergists . In our registry, HAE patients contribute 12% of all PID cases and have a high rate of genetic confirmation, though diagnostic delay in these cases is still quite high. Overall, diagnostic delay amongst the predominant forms of PID varied from 4 months in SCID—which is similar to data reported by others —to 141 months in XLA patients. Obviously, such long diagnostic delays lead to a number of unrecognized PAD deaths and contribute to the low proportion of humoral deficiencies in the registry. Diagnostic delay amongst NBS patients was shorter (median 2.5 years) than that reported previously in a smaller cohort of Russian NBS patients (median 5.0 years) . Yet the range of diagnostic delay is rather substantial: some patients were diagnosed as teenagers only after the onset of a malignancy, in spite of continuous follow-up by neurologists. 
Sadly, with the increase in PID diagnoses in the last 5 years, there has been no improvement in diagnostic delay. This, yet again, raises the question of neonatal screening. Wider availability of next-generation sequencing methods, which were routinely introduced in Russia only in 2017, may also change this dynamic. Unsurprisingly, 67% of the genetic defects in our cohort were detected via Sanger sequencing, in the most frequent and well-defined PID . A significant proportion of the mutations in the recently described genes were confirmed only with the advent of NGS techniques . NGS has allowed us to detect mutations in as many as 80 PID genes, sometimes with only one or a few patients per gene. The application of NGS to PID diagnosis has revolutionized the field by identifying novel disease-causing genes and allowing for the quick and relatively inexpensive detection of defects therein . Adult PIDs show a substantially lower rate of genetic confirmation than that seen in children. This is partially because genetic defects are often not found in CVID, even using NGS methods . Yet it also reflects the fact that adults are less likely to pay for genetic testing since, in Russia, it is not covered by the state or by medical insurance. As described by others , the majority of the genetic defects were found in males, because many of the "old" PIDs have X-linked inheritance. In highly consanguineous populations, AR PIDs represent 70–90% of cases . Interestingly—though the Russian population is very heterogeneous, with low numbers of consanguineous marriages (45 families, 1.9%)—AR genetic defects comprised 30% of all defects described in the cohort, with 40% of these being homozygous for the respective mutations. This is due to the "founder effect," known for affecting several PID genes in the Slavic population. The majority of NBS patients—74 (98.7%)—were homozygous for the "Slavic" mutation . A high frequency of the RAG1 c.256_257delAA p.K86fs mutation is also typical for Slavic populations, as previously noted . Other homozygous mutations were reported in patients with defects in the MEFV and AIRE genes, which are known for hot-spot mutations. Our cohort included a group of patients with large aberrations, involving at least one PID gene. Therefore, we conclude that patients with complex phenotypes require implementation of not just Sanger sequencing and/or NGS methods, which can only indirectly point to a large aberration, but also cytogenetic methods, including CMA. Moreover, even well-described PIDs like HAE often require a combination of genetic methods, including MLPA, to detect large deletions frequent in this disease . Our first analysis of the Russian PID population demonstrates substantial genetic diversity and a high rate of genetic diagnosis confirmation—49% of all registered patients. This is comparable to 36–43% of genetic PID confirmation in patients from French and German registries . The importance of genetic defect verification cannot be overstated, as it influences the overall treatment approach (HSCT vs. conservative treatment) and targeted therapy validation. It is also crucial for prenatal/preimplantation testing—which, if implemented, allows families to have healthy children. This is especially important for families with currently incurable PIDs, like AT and some others. As previously published , the main treatment strategy for most PID patients (52% in the current study) is regular IG replacement.
Additionally—in contrast to European data —the vast majority of patients in Russia are treated with IVIG, with only 4% of the patients having experience with subcutaneous IG replacement. Hence, IG substitution in Russia requires systemic modifications, i.e., wider availability of SCIG and home IVIG infusions that are not available at this time. To our knowledge, the Russian PID registry is the first to analyze the use of monoclonal antibodies and other biologics in the treatment of PID symptoms. The number of patients treated with this kind of therapy in this cohort is rather high, reaching 25%. Finally, 12% of patients underwent curative treatment, predominantly HSCT—a number comparable to the German registry . The proportions of transplanted patients with phagocytic disorders and with immune dysregulation were also similar in both registries. Yet, in comparison with the German registry—where one third of all HSCT was performed in CID patients—the predominant HSCT group in Russia consisted of patients with syndromic PIDs (18%). This reflects a significant prevalence of NBS patients, for whom HSCT has been shown to be a successful and safe treatment strategy . In conclusion, the current study has summarized the epidemiological features of PID patients in Russia and highlighted the main challenges for the diagnosis and treatment of patients with PID. As with all other rare disease registries, the Russian PID registry is a powerful tool—not just for data collection but also to help improve PID patient care, especially in the setting of a large country with highly diverse regional features. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. This study was approved by the ethics committee of the Dmitry Rogachev National Center of Pediatric Hematology, Oncology, and Immunology (approval No 2∋/2-20). All patients or their legal guardians gave written informed consent for participation in the registry. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Selected microwave irradiation effectively inactivates airborne avian influenza A(H5N1) virus

The highly pathogenic avian influenza A(H5N1) virus, a member of the Influenzavirus A genus within the Orthomyxoviridae family, poses a significant global threat to both animal and human health , . Since the identification of a particular viral lineage in domestic geese in Guangdong, China, in 1996, this pathogen has led to severe epizootics in poultry populations throughout Asia, Europe, Africa, and North America . Of particular concern is the virus's ability to cross the species barrier and infect humans, with sporadic cases documented in multiple countries following direct contact with infected animals – . Notably, the A(H5N1) virus exhibits a case fatality rate of approximately 50% in affected individuals . Although limited human-to-human transmission has been observed in a small number of family clusters , sustained community spread has not yet been reported. Additionally, this virus has been detected in cattle and their milk, posing a significant risk of transmission to humans , . The potential for A(H5N1) to evolve more efficient human-to-human transmission and trigger a global pandemic remains a serious concern for public health authorities worldwide , . Furthermore, the possibility of genetic reassortment among avian, swine, and human influenza viruses increases the risk of novel strains emerging with enhanced transmissibility . Exposure to infected poultry or contaminated environments is the primary risk factor for human infections, placing poultry workers, veterinarians, and healthcare workers at increased risk , . In human infections, A(H5N1) targets the alveoli of the lower respiratory tract , causing severe viral pneumonia accompanied by a profound dysregulation of the host cytokine response . In general, the clinical course is progressive, frequently resulting in acute respiratory distress syndrome and multi-organ failure . Consequently, an A(H5N1) outbreak could severely strain healthcare systems , , , highlighting the urgent need for measures to control infections in poultry and cattle to reduce the risk of human exposure. Electromagnetic radiation within the microwave spectrum, spanning frequencies from 300 MHz to 300 GHz, exhibits non-ionizing properties while possessing sufficient energy to induce molecular vibrations in matter , . Among the diverse applications of this vibrational excitation, the phenomenon of resonant energy transfer from microwave radiation to specific acoustic vibrational modes of viral particles has emerged as a promising avenue for non-chemical decontamination strategies – , − . This approach shows significant promise for real-time air decontamination, primarily aimed at mitigating the spread of airborne respiratory pathogens , . The mechanism exploits the confined acoustic dipolar mode of resonance in viral particles . When aerosolized viruses are exposed to microwave radiation at specific frequencies, energy is transferred to the virions' confined acoustic vibrational modes . This energy transfer induces resonance, leading to structural disruption and subsequent inactivation of the viral particles , . Recent studies have demonstrated the effectiveness of this method against various strains of SARS-CoV-2, including the Wuhan, delta, and omicron variants , , .
Experiments conducted in a controlled bioaerosol system revealed that exposure to microwave radiation resulted in an average reduction of 91.31% in viral titer across these variants . Comparable levels of inactivation were observed for the H1N1 influenza virus, achieving a 90% reduction in viral titer . However, notable differences in the optimal parameters for inactivation were identified between SARS-CoV-2 and H1N1 influenza virus . While SARS-CoV-2 showed sensitivity to frequencies up to 12 GHz, the H1N1 influenza virus exhibited susceptibility to higher frequencies, up to 16 GHz . The duration of exposure also emerged as a critical factor, with SARS-CoV-2 demonstrating a ten-fold reduction in infectivity within one minute , , while the H1N1 influenza virus required five minutes to achieve comparable levels of inactivation . Given the ongoing global concern regarding the A(H5N1) virus and its potential impact on animal and human health , , this study was designed to evaluate the efficacy of microwave radiation for inactivating aerosolized A(H5N1) virus particles. The investigation had two primary objectives. First, using a frequency step size approach, we sought to identify the most suitable frequency band to maximize virucidal activity against A(H5N1). Second, we examined how the microwave application time affected the inactivation elicited by microwave illumination. Experimental setup All tests were conducted in accordance with the established guidelines for exposure setups in biological experiments . The experimental protocol for investigating the effects of microwave radiation on aerosolized A(H5N1) virus comprised five principal stages. Initially, the virus was propagated in Madin-Darby canine kidney (MDCK) cells to generate a high-titer suspension suitable for subsequent aerosolization. This stage was crucial in ensuring a sufficient viral load for the experiments and maintaining consistency across trials. Following the preparatory phase, the viral suspension underwent a controlled aerosolization process to create a fine mist of airborne particles. This step simulated real-world conditions of A(H5N1) viral transmission and allowed for the assessment of microwave radiation effects on suspended viral particles. The aerosolized virus was then subjected to microwave exposure under rigorously controlled conditions. To elucidate the optimal parameters for viral inactivation, a systematic evaluation of various frequency bands was conducted. This approach facilitated the identification of specific frequency ranges that demonstrated maximal efficacy in viral inactivation. To further enhance our understanding of the microwave inactivation process, a temporal optimization study was performed. This experiment aimed to elucidate the relationship between exposure duration and inactivation efficacy, providing insights into the kinetics of A(H5N1) inactivation under microwave radiation.
The propagation of the A(H5N1) virus was conducted in a Biosafety Level 3 (BSL-3) laboratory at ViroStatics srl, located within the Scientific and Technological Park Porto Conte Ricerche srl (Alghero, Italy). MDCK cells, obtained from the American Type Culture Collection (Manassas, VA, USA), were maintained at 37 °C in a 5% CO 2 atmosphere. The cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% fetal bovine serum (FBS; Biowest, Nuaillé, France), 1% penicillin/streptomycin antibiotic solution (Biowest), and 1% L-glutamine (Biowest). Following propagation of the live highly pathogenic avian influenza virus (H5N1) A/ck/Israel/65/10, infectious titers were quantified using the standard 50% tissue culture infectious dose (TCID 50 ) assay in MDCK cells. Upon achieving a high titer (> 1 × 10 5 TCID 50 /mL) of the A(H5N1) strain, a stock suspension was prepared for subsequent aerosolization experiments. All experimental procedures were carried out at a controlled temperature of 21 °C. Furthermore, the BSL-3 laboratory operator was blinded to the details of the virus inactivation protocol, ensuring that all tests were conducted under blinded conditions. The A(H5N1) viral suspension was aerosolized using a commercially available aerosol generator (Omron, Kyoto, Japan) to produce particles up to 1 μm in size within a 32 L plastic, air-proof container. This aerosolization process was designed to mimic the natural airborne transmission of the virus, simulating the droplets and aerosols that would be produced during respiratory events such as coughing, sneezing, or talking. The use of a sealed container ensured containment of the aerosolized virus for controlled exposure to the subsequent microwave treatment. The aerosolization process continued until the virus occupied the entire volume of the chamber, ensuring homogeneous distribution. The aerosolized A(H5N1) virus was subjected to microwave radiation generated by a radio frequency (RF) generator, a custom-designed apparatus previously described in detail , . In brief, this RF system was specifically engineered for controlled microwave radiation delivery in virus inactivation studies.
The system comprised the following components: an ultra-wideband frequency-tunable synthesizer capable of operating across the C band to the Ku band, allowing for the testing of a broad range of frequencies; medium power and high-power microwave amplifiers to enhance signal strength; a digital variable attenuator to regulate output power; and embedded software written in C++ on the ESP32 platform using Visual Studio Code, which controlled all possible configurations of the RF components. The final power amplifier of the demonstrator employed cutting-edge 0.15 μm GaN on SiC solid-state high-electron-mobility transistor technology, capable of delivering up to 10 W across an ultra-wideband range. The transmitter's RF output was connected to a horn antenna using an RF cable. This setup generated an electromagnetic field with a strength of 200 V/m in proximity to the antenna and 40 V/m at the vertices of the chamber containing the aerosolized virus. These values represent mean field amplitudes measured across the 8–16 GHz band under the antenna and averaged for all corners. These measurements were consistent with electromagnetic simulations performed at 8 GHz (Fig. ) using CST Studio Suite (Dassault systems, Seattle, WA, USA). Based on the measured field values, the power density distribution within the chamber was determined using the Poynting vector formulation under free-space conditions (characteristic impedance Z0 = 377 Ω). The resultant power densities ranged from 4.24 W/m² at the chamber corners to 106.2 W/m² directly beneath the antenna. Notably, the whole-body exposure limit, considering the expected exposure paradigm as uncontrolled exposure in the air, is 10 W/m² averaged over 30 min, as per IEEE standards . Based on the measured mean value of 40 V/m at the chamber corners, the corresponding power density was calculated as follows: (1600/377) W/m² × (10 min / 30 min) = 1.4 W/m² (averaged over 30 min). The resulting power density represents 14.3% of the permissible exposure level when time-averaged over a 30-min interval. For the shorter exposure durations investigated in this study (1, 3, and 5 min), the normalized power densities correspond to 1.43%, 4.29%, and 7.14% of the IEEE standard limit , respectively. These values were derived by applying temporal scaling factors of 0.033, 0.1, and 0.167 (representing the ratio of exposure duration to the 30-min averaging period) to the reference power density. Therefore, the resultant exposure levels are substantially lower than the established thresholds known to cause thermal discomfort in humans. The designed setup enabled precise control over the frequency band and exposure time. Prior to each experiment, the system was calibrated using a broadband field meter to ensure accurate assessment of the electromagnetic field's inactivation potency at specific frequencies . Optimization of the frequency band In a series of experiments designed to optimize the frequency band for viral inactivation, aerosolized viral samples were systematically exposed to microwave radiation. Each sample was subjected to a standardized 10-minute microwave exposure period.
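As a quick cross-check of the exposure arithmetic above, the free-space power density and its time-averaged fraction of the 10 W/m² IEEE limit can be reproduced in a few lines. The sketch below is illustrative only and is not part of the study's instrumentation software; the function names are ours, the field amplitudes and durations are the ones reported above, and small differences from the quoted percentages reflect rounding.

```python
# Minimal sketch: plane-wave (free-space) power density S = E^2 / Z0 and the
# time-averaged fraction of the IEEE whole-body limit (10 W/m^2 over 30 min).
Z0 = 377.0              # characteristic impedance of free space, ohms
IEEE_LIMIT_W_M2 = 10.0  # limit averaged over the 30-min window
AVG_WINDOW_MIN = 30.0

def power_density(e_field_v_per_m: float) -> float:
    """Power density in W/m^2 for a given field amplitude in V/m."""
    return e_field_v_per_m ** 2 / Z0

def fraction_of_limit(e_field_v_per_m: float, exposure_min: float) -> float:
    """Exposure averaged over the 30-min window, as a fraction of the limit."""
    averaged = power_density(e_field_v_per_m) * (exposure_min / AVG_WINDOW_MIN)
    return averaged / IEEE_LIMIT_W_M2

print(f"{power_density(40.0):.2f} W/m^2 at the chamber corners")   # ~4.24
print(f"{power_density(200.0):.1f} W/m^2 beneath the antenna")     # ~106.1
for minutes in (1, 3, 5, 10):
    print(f"{minutes} min exposure: {fraction_of_limit(40.0, minutes):.1%} of the limit")
```

Running the sketch reproduces the roughly 1.4 W/m² time-averaged value for the 10-minute exposures and the progressively smaller fractions for the 1-, 3-, and 5-minute runs.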
To comprehensively assess the inactivation efficacy of microwave radiation on aerosolized A(H5N1) virus, seven distinct frequency bands were evaluated: 8–10 GHz, 9–11 GHz, 10–12 GHz, 11–13 GHz, 12–14 GHz, 13–15 GHz, and 14–16 GHz. Tests for each frequency band were performed in duplicate. Control samples consisted of aerosolized viral specimens that were not subjected to microwave illumination, with the RF source remaining inactive for the entire 10-minute exposure duration. Following the microwave treatment, the irradiated aerosol was recovered through a process of active impingement . This recovery method involved the direct collection of the treated aerosol into a glass collector containing complete medium supplemented with 2% FBS. The glass collector was equipped with an inlet and tangential nozzles, facilitating the efficient suction of air from the plastic chamber when a vacuum was applied at a rate of 12 L/min. This collection process ensured the preservation of the treated viral particles for subsequent analysis and quantification of inactivation efficacy across the various frequency bands tested. Results were expressed using two complementary methods to ensure comprehensive representation of the experimental outcomes. First, mean virus titers were calculated based on the TCID 50 assay, providing a quantitative measure of viral infectivity. Second, viral inactivation ratios were computed by comparing the titers of illuminated samples to those of unilluminated controls. These ratios were presented as means derived from a minimum of two independent experiments. Optimization of the exposure time After identifying the optimal frequency band for inactivating aerosolized A(H5N1) virus, we investigated the influence of exposure duration on inactivation efficacy. Our objective was to reduce the total exposure time from the initial 10-minute duration employed in preceding experiments, with the primary aim of identifying the minimum microwave illumination time required for effective viral inactivation. To achieve this goal, we evaluated three distinct exposure durations: 5 min, 3 min, and 1 min within the previously determined optimal frequency band. Each duration was subjected to triplicate testing. Aerosolized virus samples were subjected to microwave radiation using these parameters, and the residual viral infectivity was subsequently assessed. To quantify the results, we calculated and reported mean virus titers from the TCID 50 assay. Additionally, we computed viral inactivation ratios by comparing microwave-exposed samples to unilluminated controls, based on at least triplicate experiments. Uncertainty quantification To ensure a reliable representation of viral inactivation, an uncertainty propagation method was applied. This approach systematically incorporates the variability inherent in both control and test measurements, yielding an interval of possible inactivation values. 
The boundaries of this range were determined using the following equations: minimum inactivation = [(C M − C E) − (T M + T E)] / (C M − C E), and maximum inactivation = [(C M + C E) − (T M − T E)] / (C M + C E), where C M represents the mean value of the control measurement, C E denotes the absolute error associated with the control measurement, T M signifies the mean value of the test measurement, and T E indicates the absolute error associated with the test measurement. The resulting uncertainty in viral inactivation was expressed as a range, defined by its calculated minimum and maximum values.
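A minimal sketch of this error-propagation step is given below, assuming the control and test values are mean titers with their associated absolute errors; the function name and the example numbers are ours and purely illustrative.

```python
def inactivation_range(c_mean: float, c_err: float,
                       t_mean: float, t_err: float) -> tuple[float, float]:
    """Propagate control/test measurement errors into a (min, max) inactivation interval.

    c_mean, c_err: mean and absolute error of the unilluminated control
    t_mean, t_err: mean and absolute error of the microwave-exposed sample
    """
    minimum = ((c_mean - c_err) - (t_mean + t_err)) / (c_mean - c_err)
    maximum = ((c_mean + c_err) - (t_mean - t_err)) / (c_mean + c_err)
    return minimum, maximum

# Hypothetical titers (TCID50/mL), for illustration only:
low, high = inactivation_range(c_mean=1.0e5, c_err=1.0e4, t_mean=1.0e4, t_err=2.0e3)
print(f"viral inactivation between {low:.0%} and {high:.0%}")   # between 87% and 93%
```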
Efficacy of microwave radiation for inactivating aerosolized a(H5N1) virus across different frequency bands The inactivation efficacy of microwave radiation on aerosolized A(H5N1) virus as a function of frequency band is summarized in Table . A non-linear frequency-dependent effect on viral inactivation was observed, with distinct patterns of efficacy across the tested frequency bands (Fig. ). The most pronounced inactivation was achieved in the 11–13 GHz frequency band, particularly within the 11–12 GHz band, leading to a mean reduction of 89% in viral titer (range: 88 − 90%). This was closely followed by the 8–10 GHz band, which demonstrated a mean reduction of 83% (range: 78 − 88%). The 10–12 GHz band also exhibited significant inactivation, with a mean reduction of 79% (range: 76 − 83%). Notably, the 9–11 GHz band displayed a somewhat lower, yet still substantial, inactivation efficacy with a mean reduction of 66% (range: 61 − 70%). In contrast, the higher frequency bands demonstrated a lack of inactivation efficacy.
Specifically, the 12–14 GHz, 13–15 GHz, and 14–16 GHz ranges showed minimal to no reduction in viral titer compared to the unirradiated control. The mean viral titers for these frequency bands were comparable to or even slightly higher than the control, although this difference is likely within the margin of experimental error. Based on these findings and the previous success in achieving approximately 90% inactivation of SARS-CoV-2 , , a frequency band of 8–12 GHz was selected for further evaluation in optimizing exposure time experiments. This choice was motivated by the observed efficacy within this frequency range and its potential to effectively inactivate both A(H5N1) and SARS-CoV-2 viruses, suggesting a broader applicability of this approach across different viral pathogens. Time-dependent efficacy of microwave radiation for inactivating aerosolized a(H5N1) virus Following the optimization of the frequency band, we investigated the impact of exposure duration on the efficacy of microwave radiation for inactivating aerosolized A(H5N1) virus. The results, summarized in Table , demonstrate a clear time-dependent effect on viral inactivation within the 8–12 GHz frequency band. A positive correlation between exposure time and viral inactivation efficacy was observed (Fig. ). Notably, the 5-minute exposure period demonstrated optimal efficacy, yielding a mean viral titer reduction of 94% (range: 92 − 95%). The narrow range of outcomes indicates high consistency, suggesting that this duration was sufficient to achieve robust and reproducible viral inactivation. In contrast, a 3-minute exposure resulted in moderate inactivation, with a mean reduction of 58% (range: 29 − 74%). The broader range observed in this condition suggests greater variability in outcomes, potentially due to the exposure time approaching a critical threshold for effective inactivation. Notably, a brief 1-minute exposure yielded a mean reduction of 48% (range: 0 − 76%). The substantial variability in results, ranging from no effect to significant inactivation, indicates that this duration is insufficient to ensure consistent virucidal activity.
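For readers more accustomed to log-reduction values, the percentage reductions reported in this section can be converted with log10 reduction = -log10(1 - r). The sketch below applies this to the headline figures; it is a unit conversion added here for convenience, not an analysis performed in the study.

```python
import math

def log10_reduction(fraction_reduced: float) -> float:
    """Convert a fractional titer reduction (e.g., 0.94) into a log10 reduction."""
    return -math.log10(1.0 - fraction_reduced)

for label, r in [("11-13 GHz band, 10 min", 0.89), ("8-12 GHz band, 5 min", 0.94)]:
    print(f"{label}: {r:.0%} reduction is about {log10_reduction(r):.2f} log10")
# 89% corresponds to roughly 0.96 log10 and 94% to roughly 1.22 log10.
```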
Recent epidemics and pandemics of respiratory viruses – including the 2003 severe acute respiratory syndrome outbreak, the 2009 H1N1 influenza pandemic, the 2012 Middle East respiratory syndrome coronavirus outbreak, and the COVID-19 pandemic caused by SARS-CoV-2 – have necessitated a critical reassessment of existing strategies for viral containment and prevention. Notably, a recent Cochrane review suggested that relying exclusively on antiviral medications and vaccines may be insufficient to effectively interrupt or mitigate the spread of acute respiratory viruses . In response to these challenges, microwave inactivation of airborne microorganisms has emerged as a promising non-chemical technology for viral inactivation 17,19–21,23−25 . This is, to our knowledge, the first study to examine the inactivation efficacy of microwave illumination against aerosolized avian influenza A(H5N1) virus. Our research, aimed at optimizing the approach to maximize virucidal activity against an airborne pathogen currently under close surveillance , , , yielded two principal findings. First, our analysis revealed that the optimal frequency range for inactivating A(H5N1) in an aerosolized state lies between 11 and 13 GHz, resulting in a substantial mean reduction of 89% in viral titer. Second, we unequivocally demonstrated the time-dependent nature of A(H5N1) viral inactivation, revealing a positive correlation between exposure duration and inactivation efficacy. With respect to viral inactivation in response to the frequency range generated by the RF-wave emission system, A(H5N1) exhibited susceptibility up to 13 GHz. This threshold is lower than our previously observed value for the H1N1 human influenza virus (up to 16 GHz) but aligns with the sensitivity of SARS-CoV-2 (up to 12 GHz) .
Consequently, for subsequent experiments aimed at verifying the effect of exposure time on viral inactivation, we selected an 8–12 GHz frequency band to encompass both A(H5N1) and SARS-CoV-2 susceptibility ranges. Importantly, the similar optimal frequency ranges observed for diverse viruses hint at a common biophysical basis for microwave susceptibility among enveloped viruses. This could potentially allow for the development of generalized microwave-based disinfection protocols effective against a wide range of viral threats , . In this regard, the effectiveness of the 8–12 GHz frequency band may be attributed to its resonance with the confined acoustic vibrational modes of viral particles, as proposed by the structure resonance energy transfer (SRET) model . This resonance effect likely induces structural disruptions in viral components, ultimately leading to loss of infectivity. The SRET mechanism, as described by Yang et al. , suggests that microwaves of the same frequency can resonantly excite the dipolar mode of the confined acoustic vibrations inside virions. This process, known as microwave resonant absorption, is influenced by various factors including the virus’s hydration level, surface charge, size, and surrounding media . Our current findings on the inactivation of aerosolized A(H5N1) virus align with previous studies demonstrating the virucidal effects of non-thermal microwaves against various virions in different media – including SARS-CoV-2 and H1N1 influenza virus in aerosol form , , SARS-CoV-2 in deionized water , H3N2 influenza virus and bovine coronavirus (BCoV) in aqueous solutions, human coronavirus HCoV-229E in culture medium , and the BCoV on dry surfaces . The time-dependent nature of A(H5N1) inactivation observed in our study corroborates previous observations showing time-dependent inactivation of both SARS-CoV-2 and H1N1 in aerosols using microwave illumination , . Accordingly, our current results demonstrate that longer exposure times lead to more consistent and effective viral inactivation, with the 5-minute exposure striking an optimal balance between high efficacy and practical application time. Several methodological limitations warrant careful consideration in interpreting the present findings. While our experimental setup was intentionally designed to closely simulate real-world conditions, we recognize that variables such as ambient humidity, temperature fluctuations, and the presence of organic matter may substantially influence inactivation efficacy. Importantly, the methodology employed in the present study differs from that of Banting et al. , who developed a system specifically optimized for precise control over electromagnetic field distribution using custom-designed microwave guide sections. However, their approach represents a significant departure from the dynamic and variable conditions typically encountered in fluid dynamics under real-world scenarios. In contrast, our study prioritized ecological validity by employing a setup that more accurately reflects realistic environmental conditions, despite the inherent challenges posed by variability in electric field intensity across temporal, spatial, and frequency domains. Our research demonstrates the successful inactivation of aerosolized viruses; however, further studies are needed to comprehensively evaluate this approach’s effectiveness against viruses in other matrices, such as surfaces and liquid media. 
We also acknowledge that, despite its overall effectiveness, our experimental framework has inherent limitations in quantifying process variability at intermediate stages. The system’s complexity, coupled with the lack of measurable data during the aerosolization and collection phases, hindered a quantitative analysis of these potential fluctuations. As a result, our analytical scope was necessarily restricted to the downstream harvest titration stage. In addition, non-uniform field distribution within the test box and unquantified dosimetric uncertainties posed further technical challenges that warrant attention in future investigations to enhance methodological precision and experimental reproducibility. Nevertheless, it is important to emphasize that the SRET methodology primarily relies on field amplitude and is not directly dependent on dosimetric evaluations. Finally, it should be noted that no positive controls were included in this study to quantify variance or confirm expected responses. To facilitate the advancement and validation of non-thermal microwave technology for viral inactivation, future studies should systematically address these methodological caveats. Given the ongoing challenges posed by emerging and re-emerging avian influenza threats , , , the investigation of non-thermal microwaves in real-world environments represents a crucial next step. For instance, the implementation of microwave emitters optimized against A(H5N1) could potentially provide continuous disinfection of circulating air in high-risk environments such as poultry farms, processing facilities, and veterinary clinics. This approach could significantly mitigate the risk of airborne transmission in these settings. This study demonstrates the efficacy of microwave radiation in inactivating aerosolized A(H5N1) virus. Notably, our results revealed a clear time-dependent effect on viral inactivation within the 8–12 GHz frequency band, with a 5-minute exposure demonstrating optimal efficacy. This exposure duration yielded the most consistent and effective viral inactivation, resulting in a mean viral titer reduction of 94% (range: 92–95%). These findings corroborate previous research on other enveloped viruses, indicating a shared biophysical foundation for microwave susceptibility. This commonality could pave the way for the development of broadly applicable disinfection protocols. While further research is needed to address limitations and explore real-world applications, this non-thermal microwave approach shows promise as a novel strategy for mitigating the spread of airborne viral pathogens, including the highly pathogenic A(H5N1) influenza virus. |
Validation of a new species for studying postoperative atrial fibrillation: Swine sterile pericarditis model

INTRODUCTION Postoperative atrial fibrillation (POAF) is the most common complication arising following open heart surgery, occurring in 30%–50% of patients with no prior history of AF. It is associated with increased in-hospital and 6-month mortality, as well as in-hospital morbidity, including hemodynamic compromise, heart failure, and stroke. POAF is no longer considered a transient one-time event as it is associated with an increased long-term vulnerability to the development of AF. , A recent clinical study showed that a longer duration of POAF is associated with worsened long-term survival. – The canine sterile pericarditis model associated with atrial inflammation is an experimental counterpart of POAF. Using that model, we demonstrated that epicardial inflammation and its proliferation occurring in the atria produces a loss of epicardial myocytes and an altered distribution of connexins 40 and 43. These changes are associated with non-uniform slowing of conduction, thus creating the vulnerable substrates for the initiation and maintenance of POAF. In addition, the inducibility of POAF in the canine model is consistent with the time course of atrial arrhythmias in patients after open heart surgery, both peaking 2–4 days after surgery. However, the use of canines for research is restricted by ethics committees in many countries, and social acceptance is declining. Recently, the swine has been increasingly used for cardiac research because it has similar physical and electrophysiological properties to humans, and is socially more accepted. , Although a recent study reported a model of sterile pericarditis-induced atrial myopathy in Aachen minipigs, the study failed to induce POAF during the postoperative period (within 5 days) after open heart surgery. The purpose of this study is to validate the feasibility of the swine sterile pericarditis model as an experimental counterpart to study POAF. To do this, we compared the electrophysiologic data between the swine pericarditis model and the previously published canine sterile pericarditis model for validation. METHODS Animal experimental protocols were approved by the Case Western Reserve University Institutional Animal Care and Use Committee (IACUC). All studies were performed according to the guidelines specified by our IACUC, the Department of Agriculture Animal Welfare Act, the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and the Association for Assessment and Accreditation of Laboratory Animal Care International. 2.1 | The creation of the swine sterile pericarditis model Sterile pericarditis was created in seven domestic pigs weighing 35–60 kg (age 3–5 months). Under general anesthesia, the pigs underwent a right thoracotomy between the 4th and 5th rib in the 4th intercostal space. The heart was exposed and cradled in the pericardium using standard surgical techniques. A temporary bipolar pacing wire (Streamline 6495, Medtronic, MN) was secured into the posterior left atrium (PLA, between the right inferior pulmonary vein and coronary sinus, ). A pair of stainless steel wire electrodes, coated with EDP polymer except for the tip, was sutured to the right atrial appendage (RAA). Also, another electrode pair was sutured onto the right ventricle for monitoring during the conscious closed-chest state study.
All three electrodes were brought out through the chest wall and exteriorized posteriorly in the middle of the neck for use in pacing and recording. Then, the atrial surfaces were dusted with sterile talcum powder, and a double layer of gauze was placed on both the right and left atrial free walls. These steps create the irritant for the level of pericarditis required to have an effective arrhythmia model. The pericardiotomy was repaired, and the chest was closed in a standard fashion. Finally, antibiotics and analgesic agents were administered, and the pigs were allowed to recover. 2.2 | Electrophysiology study protocol In the conscious or anesthetized closed-chest states, all pigs underwent basic electrophysiologic studies on two or more of postoperative days 1, 2, 3, or 4. The basic electrophysiologic study was performed as follows: baseline measurements were made to determine the stimulus threshold for atrial capture and the atrial effective refractory period (AERP) at each electrode site (RAA and PLA). AERPs were determined at twice the capture threshold. All parameters were measured at pacing CLs of 400, 300, and 200 ms. 2.3 | POAF induction protocol POAF induction was attempted using rapid atrial pacing for 1–5 s performed from each atrial electrode site, beginning at a cycle length (CL) of 120 ms and decrementing by 2–5 ms until loss of capture or POAF was achieved. All pacing was performed with a pulse width of 1.8 ms and a stimulus strength sufficient to obtain atrial capture. After POAF was induced, its duration was characterized. Sustained POAF was defined as lasting more than 5 min. In both the conscious and anesthetized closed-chest states, the electrophysiological study and POAF induction protocols for all study components were performed with a Bloom DTU stimulator (Bloom Electrophysiology, Denver, CO). Both the induction of POAF and any ensuing sustained POAF (>5 min) were recorded using the Bard LabSystem PRO (Bard Electrophysiology, Lowell, MA), which captured atrial electrograms (AEGs) from the bipolar electrodes placed at the RAA, PLA, and right ventricle. 2.4 | Statistical analysis Data are presented as the mean ± SD. Minitab (Minitab Inc., State College, PA) was used for statistical analyses. Student’s paired t-test (normally distributed variables) or the Wilcoxon signed-rank test (non-normally distributed variables) was used to compare differences in the threshold and AERP in the PLA and RAA across postoperative days. Normality was assessed using the Jarque-Bera test. Also, Student’s t-test was used to compare differences between pigs and dogs in the AERP in the PLA and RAA on postoperative days. A value of p ≤ .05 was considered statistically significant.
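For illustration only, the decision rule described in section 2.4 (Jarque-Bera normality check, then a paired t-test or Wilcoxon signed-rank test, plus an unpaired t-test for the pig-versus-dog comparison) can be sketched in Python with SciPy rather than Minitab. The AERP arrays below are hypothetical placeholders, not the study's measurements.

```python
# Illustrative sketch of the section 2.4 analysis using SciPy instead of Minitab.
# The arrays below are placeholders, not the study's data.
import numpy as np
from scipy import stats

aerp_day1 = np.array([118, 112, 125, 120, 116, 110, 122], dtype=float)  # ms, hypothetical
aerp_day3 = np.array([157, 149, 166, 158, 152, 145, 160], dtype=float)  # ms, hypothetical

# Normality of the paired differences (Jarque-Bera, as stated in the text).
jb_stat, jb_p = stats.jarque_bera(aerp_day3 - aerp_day1)

# Paired t-test if approximately normal, otherwise Wilcoxon signed-rank test.
if jb_p > 0.05:
    stat, p = stats.ttest_rel(aerp_day1, aerp_day3)
    test = "paired t-test"
else:
    stat, p = stats.wilcoxon(aerp_day1, aerp_day3)
    test = "Wilcoxon signed-rank test"
print(f"Day 1 vs day 3 AERP ({test}): p = {p:.3f}")

# Unpaired Student's t-test for the pig vs. dog AERP comparison.
aerp_dogs = np.array([120, 118, 127, 124, 119, 115], dtype=float)  # ms, hypothetical
t_stat, t_p = stats.ttest_ind(aerp_day1, aerp_dogs)
print(f"Pig vs. dog AERP (unpaired t-test): p = {t_p:.3f}")
```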
RESULTS 3.1 | Electrophysiology properties (Threshold and AERP) on postoperative days 1–3 In the conscious and anesthetized closed-chest states, the pacing threshold for capture was increased from postoperative day 1 to day 3 . Mean RAA and PLA atrial threshold from all pacing CLs on postoperative day 1 versus day 3 were as follows: 2 ± 0.1 to 3.3 ± 0.6 mA in the RAA, p ≤ .05; 2.5 ± 0.1 to 4.8 ± 0.2 mA in the PLA. Also, the AERP at the twice-threshold was significantly increased in both atria from postoperative day 1 to day 3 (118 ± 8 to 157 ± 16 ms in the RAA; 98 ± 4 to 124 ± 2 ms in the PLA, both p ≤ .05, ). shows the summary data (pig vs. dog) of AERP from RAA and PLA in each pacing CL (400, 300, and 200 ms) on postoperative days 1–3.
Although the AERP at pacing CLs of 200 ms in the swine model was significantly shorter than that in the canine model on postoperative day 1, most AERP values from the swine model were consistent with the canine model on postoperative days. 3.2 | Inducibility of sustained POAF in the closed-chest studies POAF induction was attempted on days 1–4 for 2 pigs, on days 1–3 for 3 pigs, and on days 1–2 for 2 pigs, for a total of 21 days of the study. On postoperative day 1, the POAF induction study could not be performed in the conscious closed-chest state in two pigs due to their condition. Induction of sustained POAF occurred in 43% (3/7 pigs, POAF CL range 74–142 ms). POAF induction was attempted for approximately 30 min, depending on the pig’s disposition. One of the sustained POAF episodes spontaneously converted to atrial flutter, which is common after open heart surgery. Each episode is summarized in . DISCUSSION Seven domestic pigs underwent surgery for the creation of sterile pericarditis. We performed all electrophysiological studies and POAF induction in the closed-chest state during the postoperative days, the same environment as patients after open heart surgery. In the conscious and/or anesthetized closed-chest states on postoperative days 1, 2, 3, and/or 4, we demonstrated a progressive increase in the pacing threshold for capture and the AERP over time. The induction of sustained POAF occurred in 43%, consistent with prior observations in the canine sterile pericarditis model and in patients with POAF after surgery. Also, one of the sustained POAF episodes converted to atrial flutter. 4.1 | Swine versus canine sterile pericarditis model for studying POAF A canine sterile pericarditis model has been used to understand the mechanisms of onset, prevention, maintenance, and treatment of postoperative arrhythmias such as POAF and atrial flutter.
Pericarditis is a major contributor to the etiology of postoperative arrhythmias by creating an atrial substrate vulnerable to atrial arrhythmias. In the canine sterile pericarditis model, inflammation of the pericardium and myocardium is created by placing gauze dusted with talcum powder on the atrial epicardium, and our group has shown that it alters the myocyte architecture and gap junctions, leading to changes in electrophysiological properties such as AERP and conduction velocity. We are unaware of any other animal model of POAF that serves as an experimental counterpart to pericarditis-related POAF following open heart surgery. POAF is usually transient or self-limiting, unlike other forms of AF, with a return to sinus rhythm once the atrial substrate, that is, postoperative pericarditis, resolves. The progressive increases in pacing thresholds and AERP over the postoperative period in the canine model are consistent with prior observations in patients with POAF after surgery. All electrophysiologic data from the swine sterile pericarditis model were consistent with the canine sterile pericarditis model with respect to (1) the range of both pacing threshold and AERP; (2) the progressive increase in threshold and AERP over time; and (3) a 40%–50% incidence of POAF. In addition, the swine sterile pericarditis study can be performed in the conscious closed-chest state on postoperative days. A recent study reported a model of sterile pericarditis-induced atrial myopathy in Aachen minipigs, suggesting a model of structural remodeling in AF. That study demonstrated that atrial fibrosis resulting from the inflammation of the pericardium and myocardium preceded the induction of AF. In comparison with our study, the progressive increases in pacing thresholds over time are consistent. However, that study failed to induce POAF during the postoperative period (within 5 days). The major differences compared with our study are weight (minipigs [~20 kg] vs. farm pigs [~60 kg]) and the POAF induction protocol (pacing at 2× threshold for 20 sec vs. pacing at more than 2× threshold for up to 5 sec). Perhaps the latter difference is most important because (1) pacing with a large stimulus strength is usually necessary, as the stimulus threshold for capture increases when pacing CLs decrease; and (2) a pacing duration of 20 sec could initiate reentrant circuits and then terminate them due to continuous pacing. 4.2 | Clinical implication Although POAF in the swine sterile pericarditis model is not spontaneous, it has several clinical implications. Given the wide spectrum of etiologies and disease processes associated with POAF, a reproducible swine sterile pericarditis model is essential to target the inflammation of the pericardium and myocardium that underlies the development of POAF. In addition, it can be used to identify the inflammation-related mechanisms of POAF for its onset, maintenance, treatment, and prevention. A swine heart, which is similar in size and in physical and electrophysiological properties to the human heart, may be useful for developing medical equipment and advanced technologies in the clinical setting for electrophysiology procedures. Therefore, the more socially accepted swine sterile pericarditis model may potentially be used to model clinically equivalent postoperative arrhythmias in humans. 4.3 | Limitations In addition to the PLA and RAA, the canine sterile pericarditis model has an electrode site at Bachmann’s bundle (BB) for pacing and recording.
The difficulty in accessing BB during the initial surgery to create the swine sterile pericarditis model did not allow for suturing a temporary bipolar pacing wire there for recording and pacing, which may limit the options for POAF induction. Also, we did not record ECG limb leads during the conscious closed-chest state due to the difficulty of keeping the ECG cable connected to a minimally restrained pig. However, we recorded from the right ventricular site for monitoring. Finally, although the swine sterile pericarditis model was developed as an experimental counterpart of POAF after open heart surgery, POAF in patients may have more than one mechanism responsible for its maintenance due to various comorbidities, different surgery types, and complications during surgery.
CONCLUSION A newly developed swine sterile pericarditis model demonstrated electrophysiologic properties consistent with the canine model. The induction of sustained POAF occurred in 43%, consistent with prior observations in the canine model and patients with POAF after surgery. Therefore, the swine model is feasible as an experimental counterpart to study POAF. |
How to be a Better Surgeon: The Evidence for Surgical Coaching | a751a5a0-6faa-4de2-886a-7714ea101426 | 11844329 | Surgical Procedures, Operative[mh] | Why Otolaryngology? Video‐based coaching is a surgical coaching tool where a surgeon's performance is recorded and reviewed with a coach using a goal‐based discussion. Video‐based coaching is associated with improvement in self‐assessment, and decreases in complication rates and postoperative emergency department visits. , Otolaryngology is uniquely poised to incorporate video‐based learning due to the various modalities through which we operate—endoscopic, microscopic, robotic, and open—which make recording easy. Efficiency is critical to hospital administrators and surgeons. Every extra minute in the operating room adds to cost and reduces surgical productivity. Surgical coaching is one approach recommended to improve operating room efficiency. One study found that coached surgeons decreased operative time for a sleeve gastrectomy by 14 minutes compared to uncoached surgeons. Additionally, coaching can aid in optimizing the use of “first‐assists” which can contribute to better performance and efficiency. Burnout is also prevalent, with up to 84% of academic otolaryngologists reporting burnout. Surgical coaching can improve efficiency and reduce complications that can decrease burnout; however, the time required to build a coaching relationship falls to the surgeon. These competing aspects (eg, time, vulnerability, and emotional investment) need to be acknowledged and studied further. Barriers to Surgical Coaching Surgeons are known for having strong egos, and some may consider the idea of a “coach” in the operating room as challenging to their skill‐set. However, understanding the confidentiality that exists with a coach, the goal‐oriented approach of each session, and the opportunity to provide 2‐way feedback help to combat this mindset. Finding a surgical coach may deter individuals from participating in a coaching program. The Academy for Surgical Coaching (ASC) provides training and certification to validate top‐notch coaching skills. While efficiency and improved outcomes certainly decrease cost, the set up for a surgical coaching program has its own associated costs. The ASC offers certification for $1000 and a program set‐up guide for $2500. Prior studies have utilized surgical coaching on a voluntary basis without disclosed payments to the participants. It is the undefined costs that include time taken away from clinical productivity to perform coaching sessions and purchasing of new recording devices that should be considered. Patient privacy is another aspect of surgical coaching, specifically for video‐based coaching, that should be addressed. This includes the need to obtain patient consent for recording. While this barrier may dissuade surgeons from obtaining videos, once patients understand surgical coaching, 95.7% of them consent. Surgeons should understand their individual institutional policies and consult with their Chief Informational Officer to proceed appropriately from a legal standpoint. Lastly, the logistics of creating the surgical coaching relationship can be restricting. Factors include arrangements for a coach to observe in the operating room, time assessing and/or editing a video, and the coaching session to review observations. Institutions and practices invest efforts into the wellness of their employees and improvement in value of care. 
In a systematic review, surgical coaching increased confidence in participating surgeons by aiding the learning of new skills in 70% and the refinement of existing skills in 30%. It is logical to invest in surgical coaching, as it allows for training and promotion of best surgical practice while normalizing this work.
Next Steps? Otolaryngology is ripe with opportunities to incorporate surgical coaching for trainees and practicing surgeons, with the potential to integrate artificial intelligence and machine learning advances. Technologies would allow for instrument tracking to determine efficiency, or development of surgical simulations and video-based recordings with 3-D printed models to prepare surgeons for the operating room. Important next steps include a focus on operative complications, efficiency changes, and identifying areas where surgical coaching programs are most helpful. Lastly, as surgeons, we focus most of our efforts on surgical outcomes; however, our clinical practice and patient rapport are equally important. Having a “clinical surgical” coach who can observe and/or review recordings in the ambulatory setting is imperative to assure that patients and families are adequately informed about their treatment plans. A coach can also pick up on important details about body language, tone, eye contact, and patient engagement that we may not be able to recognize on our own. Surgical coaching is a beneficial method to optimize surgical performance in multiple surgical subspecialities. It is time to understand its place in otolaryngology. Author contributions: Reema Padia, conception, design, composition of manuscript; Cynthia Wang, design, composition of manuscript; LaKeisha Henry, design, composition of manuscript; Stacey L. Ishman, design, composition of manuscript; Nausheen Jamal, conception, design, composition of manuscript. Competing interests: Reema Padia, MD: Certified Member of the Academy of Surgical Coaching. Stacey L. Ishman, MD, MPH: Nyxoah research money, Ethicon consulting. All remaining authors have no conflicts to disclose. Funding source: None.
|
Clinicopathological Appearance of Epidermal Growth-Factor-Containing Fibulin-like Extracellular Matrix Protein 1 Deposition in the Lower Gastrointestinal Tract: An Autopsy-Based Study | 71fcc88c-b73d-4061-a753-2049e04659f8 | 11277079 | Forensic Medicine[mh] | Amyloid epidermal growth-factor-containing fibulin-like extracellular matrix protein 1 (AEFEMP1) amyloidosis is an age-related disorder . Although it affects elastic fibers throughout the body, the most prevalent deposition is observed in the lower gastrointestinal tract . We recently reported a case of granulomatous-type enterocolic lymphocytic phlebitis resembling amyloid-beta-related angiitis associated with EFEMP1/AEFEMP deposition and demonstrated its indirect pathogenicity . However, in contrast to amyloid-beta , the direct pathological effects associated with EFEMP1/AEFEMP1 deposition are unclear . Because amyloid deposits exhibit different deposition patterns depending on the precursor protein, understanding their morphology is essential for amyloid typing . Although 42 amyloid precursor proteins are known , the pattern of deposition around elastic fibers is characteristic of AEFEMP1 . However, the details of characteristic deposition patterns, especially which elastic fibers in which areas are most frequently affected, are unknown. Demonstrating Congo red-positive structures exhibiting apple-green birefringence under polarized light is crucial for the histopathological diagnosis of amyloid. However, it is unclear how often this can be demonstrated in EFEMP1 deposits, which show only weak congophilia . Den Braber-Ymker et al. reported that intestinal involvement in amyloidosis, including amyloid-A-derived and immunoglobulin-light-chain-derived amyloidosis, is sequential, and involvement of the muscular layer and the subsequent loss of myenteric interstitial cells of Cajal may lead to dysmotility . Because deposition in the muscular layer and around the Auerbach plexus is one of the characteristic deposition patterns of EFEMP1/AEFEMP1 , EFEMP1/AEFEMP1 deposition may cause dysmotility and constipation. In particular, AEFEMP1 deposition is closely associated with aging and is hypothesized to be a cause of lower gastrointestinal tract disorders, especially in older adults . However, studies on the association between constipation and EFEMP1/AEFEMP1 deposition are lacking. Amyloid formation and localization in a specific tissue may be triggered by fibril nuclei or seeds, a phenomenon known as seeding . We have reported the co-deposition of amyloid transthyretin (ATTR) and amyloid atrial natriuretic factor—both age-related amyloidosis —in the atrium , suggesting the presence of seeding between these amyloid deposits. Because ATTR deposition frequently involves the gastrointestinal tract , AEFEMP1 deposition may colocalize with ATTR deposition. According to Tasaki et al., ATTR and AEFEMP1 deposits do not colocalize in the colon . However, as only one case was evaluated, the colocalization of the two amyloid deposits remains unclear. Therefore, we conducted histopathological examinations of several samples from elderly cases (≥80 years old). We focused on the morphologies and distribution patterns of EFEMP1/AEFEMP1 deposits in each histological structure of the lower gastrointestinal tract. We examined the prevalence of cases where EFEMP1/AEFEMP1 deposition was presumed to be the cause of constipation. Moreover, we assessed whether colocalization of EFEMP1/AEFEMP1 and ATTR deposition is observed in the lower gastrointestinal tract. 
2.1. Clinical Profiles and Demographics In total, 41 cases were identified. One case was excluded due to death by burning as the tissue was determined to be thermally denatured and unsuitable for analysis. Consequently, 40 cases remained eligible for histopathological study. Specimens of the colon and small intestine were available for 40 and 22 cases, respectively. Clinical and pathological data for these cases are summarized in . shows detailed clinical and histopathological findings. Overall, 5 patients in the small intestine case group and 10 in the colon case group had a medical history of constipation and/or were taking laxatives, including magnesium oxide, sennoside, and lubiprostone, and/or showed pseudomelanosis coli, which suggested the presence of constipation . One patient had diverticulosis. One patient had incidental adenocarcinoma in the ascending colon, and one patient died due to perforation of the rectum due to adenocarcinoma. No statistically significant differences in clinical or histopathological data or cause of death were found between the two groups. 2.2. Pathological Findings of EFEMP1 and AEFEMP1 The results of semiquantitative analysis in the small intestine and colon are summarized in , and all findings are provided in . EFEMP1/AEFEMP1 deposition was observed around elastic fibers in blood vessels and the interstitium in both organs; vascular deposition was more severe in the subserosa than in the submucosa. EFEMP1 deposition in submucosal vessels, subserosal interstitium, and serosa was significantly greater in the small intestine than in the colon. The total immunohistochemistry (IHC) score was higher in the small intestine than in the colon, although differences were statistically insignificant. However, EFEMP1 deposition in the mucosal interstitium and around the Auerbach plexus was significantly greater in the colon than in the small intestine. Furthermore, the characteristic elastofibrotic lesion extending from the longitudinal muscular layer to the subserosa was observed more frequently in the colon than in the small intestine ( a–h). Notably, apple-green birefringence was identifiable in the vascular and serosal deposits of the small intestine, while it was only identifiable in the vascular deposits in the colon. 2.3. Evaluation of the Relationship between ATTR and EFEMP1/AEFEMP1 Deposition Representative microphotographs of ATTR and EFEMP1/AEFEMP1 deposits are shown in . In patients with cardiac ATTR amyloidosis (ATTR-CA), the deposition rate was 75% (positive in 3/4 cases) in the small intestine and 57% (positive in 4/7 cases) in the colon. In the small intestine and colon, ATTR deposition was identified mainly in the vessels ( a,b), and vascular involvement was more severe in the submucosa than in the subserosa. Most ATTR deposits were observed on small-sized arteries and exhibited stronger congophilia and clear apple-green birefringence than EFEMP1/AEFEMP1 deposits ( c,d). After screening with Congo red staining, double IHC for transthyretin and EFEMP1 was performed in two cases (cases 6 and 23), which revealed moderate or high levels of ATTR and EFEMP1/AEFEMP1 deposition. Although most ATTR and EFEMP1/AEFEMP1 deposits were observed separately in the double-IHC specimens ( e,f), colocalization of ATTR and EFEMP1/AEFEMP1 occurred in some veins in the small intestine and colon ( g,h). 2.4. 
Clinicopathological Features of Patients with Constipation A comprehensive review of patients with confirmed or suspected constipation was conducted to assess the contribution of EFEMP1/AEFEMP1 deposition to constipation. presents a summary of the clinical information and pathological findings for cases of constipation. Three cases received a clinical diagnosis of constipation (definite), whereas seven used laxatives and/or exhibited pseudomelanosis coli (possible). One case each had diabetes and chronic thyroiditis. ATTR deposition was not observed in the colon of any patient. Three cases presented with Lewy body disease (LBD), which was consistent with Braak’s LBD stage 4 or higher . LBD pathology outside the brain was identified in all three cases, and one case showed LBD pathology in the colon (Case 11). Representative findings of LBD pathology in the gastrointestinal tract are shown in , and the distribution of LBD pathology outside the brain is summarized in . We hypothesized that EFEMP1/AEFEMP1 deposition could cause constipation if the patient had a total IHC score in the colon at or above the median (≥8). Considering all available information, EFEMP1/AEFEMP1 deposition was presumed to be the sole cause of constipation in 4/10 cases, and in all three patients with colonic elastofibrosis. However, the cause of constipation was indefinite in 2/10 cases.
Based on the results of the histopathological examination, EFEMP1 deposition in the small intestine was observed to initiate in association with elastic fiber formation in the submucosal and subserosal vessels, subserosal interstitium, and serosa (early stage), progressing into the muscularis propria and peri-Auerbach plexus area (intermediate stage), and diffusely spreading to other areas, excluding the mucosa and muscularis mucosae (advanced stage). A similar progression pattern was noted in the colon, with deposition in the subserosal interstitium being considerably less than that in the small intestine. During the middle-to-advanced stages, deposits exhibiting characteristic degeneration of elastic fibers and amyloid formation were presumed to occur. Notably, despite assessing all layers with autopsy material, apple-green birefringence was detectable under polarized light in approximately half and one-third of the cases in the small intestine and colon, respectively.
Thus, the histopathological diagnosis of AEFEMP1 amyloidosis presents significant challenges, especially in biopsy specimens where the sampling area is usually limited to the submucosal layer. The difficulty might be higher in the presence of inflammation . To be aware of this deposition, it is crucial to identify the characteristic changes in the elastic plate through elastic fiber staining and confirm EFEMP1 with IHC. If the deposits exhibit weak congophilia and lack apple-green birefringence under polarized light, they should be designated “EFEMP1 deposition” and not “AEFEMP1 deposition”. The direct pathogenicity of EFEMP1/AEFEMP1 remains unknown. Tasaki et al. reported cases of gastrointestinal bleeding possibly caused by AEFEMP1 deposition that suggested an association between intestinal ischemia and vascular vulnerability. In a detailed histopathological study of a case with severe EFEMP1/AEFEMP1 deposits, we have reported that the deposits were found in all organs of the body and prominently in the lower gastrointestinal tract . EFEMP1 colocalized with fine elastic fibers but not with large elastic structures such as the elastic lamina in the aorta . Efemp1 − / − mice exhibited reduced numbers of elastic fibers in the fascia . Based on these findings, we hypothesize that EFEMP1 is most abundant around elastic fibers in the lower gastrointestinal tract; therefore, EFEMP1/AEFEMP1 deposition initiates around elastic fibers in the lower gastrointestinal tract and is most strongly impaired there. In this study, we suggest that EFEMP1/AEFEMP1 deposition is a potential cause of constipation. However, the link between histopathological severity and constipation remains unclear. Based on the results, it can be hypothesized that EFEMP1/AEFEMP1 deposition contributes to lower gastrointestinal dysfunction but may not sufficiently disable on its own to cause constipation. illustrates the dysfunctions and symptoms presumed to be caused by EFEMP1/AEFEMP1 based on our findings and other reports. To understand the direct pathogenicity of EFEMP1/AEFEMP1 deposition, further analysis using autopsy or surgical specimens with detailed clinical information is essential. Remarkably, EFEMP1 deposition accompanied by elastofibrosis was identified in the mucosa and outer layer of the muscularis propria to subserosa. This pathology was not readily apparent in hematoxylin and eosin (H&E)-stained specimens and was negative for Congo red staining. Consequently, the use of elastic fiber staining and EFEMP1 immunostaining is deemed indispensable for accurate diagnosis. Elastofibrosis in the gastrointestinal tract has been predominantly documented within polypoid lesions, although instances of diffuse nonpolypoid lesions have been reported . The histopathology outlined in the report of Schiffman et al. aligns with the elastic fiber alterations associated with EFEMP1 deposition . Thus, cases categorized as elastofibrosis might be instances where EFEMP1 deposition is the underlying cause. The pathological implications of elastofibrosis and the temporal relationship between elastic fiber proliferation and EFEMP1 deposition (i.e., which initiates the other) are unclear and necessitate further investigation. To our knowledge, this report is the first report to show colocalization of ATTR and EFEMP1/AEFEMP1 deposits. However, instances of colocalization were limited in number, and the presence of synergistic interactions between these deposits was unclear. 
There is growing consensus that in the central nervous system, a combination of one or more proteinopathies (mixed pathology) frequently manifest in individuals with neurodegenerative diseases, demonstrating synergistic interactions between these deposits . Because many proteinopathies stem from age-related amyloid deposition , assessing the mixed pathology of age-related proteins in organs beyond the brain may become increasingly vital. Considering that different amyloid precursor proteins often lead to distinct deposition patterns in various organs and forms of deposits, a comprehensive understanding of the deposition patterns of each amyloid type is imperative for accurate diagnosis. In addition to certain bias in our study population, this study was constrained by incomplete clinical information for some patients, such as the presence or absence of constipation and medication history, primarily due to the absence of severe clinical symptoms. Particularly, there was a lack of detail and availability of diagnoses for neurodegenerative diseases, including LBD. However, we believe that our detailed neuropathological evaluation , which followed current diagnostic guidelines, had minimized the possibility that we might overlook any neurodegenerative diseases that could contribute to constipation. Detailed information was unavailable for the vaccination status and history of COVID-19 infection in the analyzed cases. Degeneration was more pronounced in muscularis mucosae in autopsy material than in surgical material, potentially impacting the immunoreactivity of EFEMP1 in this region. Compared to amyloid deposits derived from other precursor proteins, EFEMP1/AEFEMP1 deposits exhibited weak congophilia. The shape of the deposits varied across sites, necessitating the use of a complex semiquantitative grading system in this study. There was a lack of information on the histopathological deposition pattern of EFEMP1/AEFEMP1 deposits, and studies are warranted in the future. Consequently, it should be noted that the results of this study are only preliminary and do not directly prove the hypotheses presented in . Further analysis using surgical specimens with lesser degeneration and more-complete clinical information is warranted. The small sample size, with only 40 cases, was another limitation of this study. The validity of this study would be better demonstrated with a larger sample size and a more diverse population. In conclusion, this study presents a histopathological evolutionary pattern of EFEMP1/AEFEMP1 in the lower gastrointestinal tract, which is potentially associated with constipation in elderly adults. Furthermore, the findings revealed that EFEMP1/AEFEMP1 deposition colocalizes with ATTR deposition, although the colocalization of the two is presumed to be coincidental. Given the challenges of histopathological diagnosis on Congo red-stained specimens, we recommend the combined use of elastic fiber staining and IHC for EFEMP1 to prevent the overlooking of this deposition. Further analysis using cases with detailed clinical information is essential to understand the pathogenicity of EFEMP1/AEFEMP1 deposition and its relationship with other age-related amyloid deposits. 4.1. Case Selection We reviewed the archives of all medical autopsy patients in our department from February 2020 to January 2022. First, we selected 164 cases in whom all organs, including the brain, could be sampled. 
Of these, we selected those ≥80 years of age, considering that EFEMP1/AEFEMP1 is an age-related condition . We extracted cases where standard histopathologic studies based on H&E and elastica–Masson staining in general organs , neuropathologic studies based on Luxol fast blue/H&E staining and IHC , and Congo red staining and IHC-based amyloid typing were conducted in the heart . Patients’ demographic and clinical characteristics (including cause of death) were retrieved from the medical records of police examinations and contributions from family members or from the primary physician if a record indicated clinic visits. This study was approved by the Ethical Committee of Toyama University (I2020006) and performed according to the ethical standards outlined in the 1964 Declaration of Helsinki and its 2008 amendment. 4.2. Tissue Samples One block per organ was sampled from the lower gastrointestinal tract. Specimens were fixed in 20% buffered formalin and routinely embedded in paraffin. Then, 4 μm thick sections were cut and stained with H&E, elastica–Masson, or underwent IHC. Furthermore, 6 μm thick sections were cut and stained with phenol Congo red . 4.3. Semiquantitative Grading System for EFEMP1/AEFEMP1 Deposition Representative microphotographs displaying the deposition patterns of EFEMP1/AEFEMP1 are presented in . The severity of immunohistochemical findings related to EFEMP1/AEFEMP1 deposition was assessed semiquantitatively. The severity of EFEMP1/AEFEMP1 pathology within vessels in the submucosa and subserosa and within the interstitium of each histological layer of the lower gastrointestinal tract, including the mucosa and muscularis mucosae, muscularis propria, the area around the Auerbach plexus, subserosa (including mesentery), and serosa, was graded using a four-point scoring system as follows:
Vessel Grading: Grade 0: No vascular EFEMP1 deposition. Grade 1: Occasional vessels with EFEMP1 deposition without amyloid properties, usually not occupying the thickness of the entire wall. Grade 2: A moderate number of vessels with EFEMP1 deposition, some occupying the full thickness of the wall and possibly exhibiting focal amyloid properties. Grade 3: Many vessels with EFEMP1 deposition, most occupying the full thickness of the wall and exhibiting focal amyloid properties.
Interstitium Grading: Grade 0: No interstitial EFEMP1 deposition. Grade 1: A few EFEMP1 deposits in the interstitium occupying each low-power (×10 microscope objective) field. Grade 2: Moderate EFEMP1 deposits in the interstitium occupying each low-power (×10 microscope objective) field. Grade 3: Many EFEMP1 deposits in the interstitium occupying each low-power (×10 microscope objective) field, some exhibiting a massive and nodular deposition pattern.
Subsequently, the sum of all IHC deposition grades was calculated (total IHC score). Congo red-positive structures demonstrating typical apple-green birefringence under polarized light were histologically confirmed as amyloid deposits. Representative microphotographs of this semiquantitative grading system are presented in . The severity of ATTR deposition on the vessels was assessed using the same grading system. 4.4. Single and Double IHC Single IHC using primary antibodies against fibulin-3 (EFEMP1) (mouse, clone mab3-5, 1:2000, Santa Cruz, TX, USA) and phosphorylated α-synuclein (clone LB508, 1:500; Zymed, San Francisco, CA, USA) was performed in all cases.
IHC was performed for prealbumin (transthyretin) (rabbit, clone EPR3219, 1:2000, Abcam, Cambridge, UK) in cases with positive ATTR-CA and a deposition pattern suspicious of ATTR deposition in the intestine and/or colon on Congo red-stained specimens. Antigen retrieval was performed using 98% formic acid for 1 min (EFEMP1 and transthyretin) or a heat-mediated method using pH 9 solution for 20 min (phosphorylated α-synuclein). Single IHC was performed using the Leica Bond-MAX automation system and Leica Refine detection kits (Leica Biosystems, Richmond, IL, USA), according to the manufacturer’s instructions. All sections were counterstained with hematoxylin. In cases of suspected colocalization of ATTR and EFEMP1/AEFEMP1 deposition in the small intestine and/or colon, double IHC was performed using the antibodies listed. First, IHC for EFEMP1 was performed using the same procedure as that for single IHC. After the first IHC, sections were incubated with 0.3% H2O2 for 10 min and then incubated with primary antibodies against transthyretin (overnight, 4 °C). Signal was developed using the immunoenzyme polymer method (Histofine Simple Stain MAX PO Multi; Nichirei Biosciences, Tokyo, Japan) with the Vina Green Chromogen Kit (BioCore Medical Technologies, Gaithersburg, MD, USA) for 5 min. All sections were counterstained with hematoxylin. 4.5. Statistical Analysis Data were analyzed using IBM SPSS Statistics version 29 (SPSS Inc., Chicago, IL, USA), and the threshold for statistical significance was set at p < 0.05. Fisher’s exact test was used for categorical variables (presence of symptoms, pathological findings, and cause of death). Ordinal variables (pathological scores) were compared using the Mann–Whitney U test.
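As a hedged illustration only, the scoring in section 4.3 and the comparisons in section 4.5 could be reproduced outside SPSS as sketched below: per-site grades (0–3) are summed into a total IHC score, and Fisher’s exact and Mann–Whitney U tests are applied with SciPy. All grades, group values, and the 2×2 table are hypothetical placeholders, not study data; only the ≥8 (median) threshold mentioned in the Results is taken from the text.

```python
# Illustrative sketch of the total IHC score and section 4.5 statistics.
# Grades and group values below are placeholders, not the study's data.
from scipy import stats

# Semiquantitative grades (0-3) per site, as defined in section 4.3.
colon_grades = {
    "submucosal_vessels": 2,
    "subserosal_vessels": 3,
    "mucosa_muscularis_mucosae": 1,
    "muscularis_propria": 2,
    "peri_auerbach_plexus": 2,
    "subserosa": 1,
    "serosa": 1,
}
total_ihc_score = sum(colon_grades.values())
print("Total IHC score:", total_ihc_score)
# In the Results, a colonic total IHC score at or above the median (>=8) was used
# when judging whether deposition could plausibly account for constipation.

# Fisher's exact test for a 2x2 categorical comparison (hypothetical table,
# e.g., constipation yes/no by high/low score).
odds_ratio, fisher_p = stats.fisher_exact([[4, 6], [3, 27]])

# Mann-Whitney U test for ordinal scores between two groups (hypothetical values).
small_intestine_scores = [9, 7, 10, 8, 6, 11]
colon_scores = [6, 5, 8, 7, 4, 9]
u_stat, mw_p = stats.mannwhitneyu(small_intestine_scores, colon_scores)
print(f"Fisher exact p = {fisher_p:.3f}; Mann-Whitney U p = {mw_p:.3f}")
```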
Application of local volume reduction of the dorsal glans groove in the repair of hypospadias with small glans: A retrospective study

Hypospadias is categorized into proximal, midshaft, and distal types based on the position of the urethral meatus . A glans less than 14 mm in diameter is referred to as a small glans . The onlay island flap technique was first introduced by Duckett et al. in 1980 and has since gained widespread adoption globally. Approximately one-third of distal hypospadias cases are associated with a small glans . A small glans has limited capacity to enclose a properly constructed neourethra, making its size a critical factor influencing the success of hypospadias surgery. Children with hypospadias complicated by a small glans are prone to postoperative complications such as meatal stenosis, urethral stricture, and glans dehiscence . Our team employed a technique involving local volume reduction of the dorsal glans groove to mitigate or minimize these complications.

Study design

In this retrospective study, surgical consent was routinely obtained from all patients prior to the procedure as part of standard clinical care. The consent included provisions for the use of clinical data for quality improvement and research purposes, in accordance with institutional policies. This study was conducted in compliance with the ethical guidelines for retrospective research and was approved by the Institutional Ethics Committee (Approval No. Med-Eth-Re [2023] 412). All patient data were anonymized to ensure privacy and confidentiality. The parents or guardians of participants who were younger than 18 years of age were required to provide informed consent before surgery. This retrospective study included patients with midshaft or proximal hypospadias and a small glans (< 14 mm, Fig. ) who had not undergone urethral plate transection and were treated between January 2017 and December 2020. Inclusion criteria were: age < 18 years, absence of urethral plate transection, small glans (< 14 mm), and no associated syndromes. Exclusion criteria included redo cases and preoperative testosterone treatment. The patients were categorized into two groups based on the surgical technique used. During the initial study period (January 2017–December 2018), onlay island flap urethroplasty was performed (Group 1). In the subsequent period (January 2019–December 2020), we adopted onlay island flap urethroplasty combined with local volume reduction of the dorsal glans groove (Group 2). All procedures were carried out by the same surgeon and surgical team. The HOSE scores and postoperative complication rates, including glans dehiscence, meatal stenosis, urethrocutaneous fistula, and urethral diverticula, were recorded and compared between the two groups. The HOSE scoring system evaluates key factors such as the meatal location (tip of glans/proximal glans/coronal/penile shaft), meatus shape (vertical slit/circular), urinary stream (single/spraying), penile curvature during erection (straight/mild angulation/moderate angulation/severe angulation), and the presence of urethrocutaneous fistula (number and location, if applicable) through direct observation. The maximum HOSE score is 16. Surgical success was defined as the absence of postoperative complications during the follow-up evaluation.

Operative technique

Onlay island flap urethroplasty

The penis head was sutured longitudinally with 5/0 polypropylene for traction.
Three millimeters below the corona, a circumferential incision was made without urethral plate transection. Two parallel longitudinal incisions were made from the ventral urethral orifice to the navicular fossa, the area behind the urethral orifice was connected into a "U" shape, the central urethral plate flap was retained, and glanular wings were developed on both sides. The procedure for inducing an artificial erection to ensure a straight penis was as follows: a tourniquet was placed at the root of the penis, a butterfly needle was inserted into the tip of the penis, and normal saline solution was intermittently injected into the cavernosum during the operation to evaluate the degree of penile chordee. The penis was degloved to its root, the ventral fibrous tissue was excised, and an artificial erection was induced to confirm that the penis was satisfactorily straightened. If residual ventral curvature persisted, it was corrected by dorsal tunica albuginea plication with 5/0 polypropylene sutures at the 12 o'clock position. A transverse pedicled island flap was obtained by first separating the inner mucosal layer of the prepuce from the outer cutaneous layer, rotating it to the ventral side, and then suturing it to the original urethral plate with running 6/0 polydioxanone sutures. The flap width and length differed across patients according to the location of the meatus and the characteristics of the urethral plate. The neourethra was then covered with subcutaneous tissue from the preputial flaps. The glans wings were fixed to the midline with interrupted 6/0 polydioxanone sutures. The ventral skin was closed with Byars' flaps.

Local volume reduction of the dorsal glans groove

Parallel to the distal edge of the navicular fossa of the glans, a 1.5–2.0 mm skin tangent line (lines A and B) was made on the left and right sides of the dorsal 1.5 mm position, intersecting at the midline, and a skin tangent line (lines C and D) was made from the middle point of the penile head to the two lateral ends of the original tangent line (Fig. ). The skin inside the tangent lines was removed with scissors, and approximately one-quarter of the corpus spongiosum of the glans was removed in the direction of the proximal penis (Fig. ). A 6/0 polydioxanone suture was used to close the midline area of the wound, and the skin incisions on both sides were closed (Fig. shows the blue point sutured to the red point). After the operation, the urethral orifice appeared slit-like (Fig. ).

Postoperative management

The dressing was removed 48–72 h after surgery. On the third day after surgery, the child was discharged with a catheter, and the indwelling urethral catheter was maintained for 1 week after the operation. Intravenous antibiotics were given for 1 day after surgery, and oral antibiotics were then used until the catheter was removed.

Follow-up

All patients were followed for at least 12 months. Postoperative follow-up included observations of genital appearance and urination at 1, 3, 6 and 12 months postoperatively. A short urination video and a recent picture of the genitals were collected via WeChat if the children could not attend an outpatient visit (Fig. ).

Statistical analysis

Statistical analyses were performed using SPSS version 25.0 software (IBM). The Mann‒Whitney U test was used for comparisons of nonnormally distributed variables between groups. Categorical variables were compared between the two groups using Fisher's exact test or the chi-square test.
P values less than 0.05 (P < 0.05) indicated statistically significant differences.
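To make the categorical comparisons concrete, the sketch below rebuilds the 2 × 2 tables from the complication counts reported in the Results that follow (for example, glans dehiscence in 9/57 vs 2/50 patients and meatal stenosis in 9/57 vs 1/50 patients) and tests them with Fisher's exact test and the chi-square test. The original analysis was performed in SPSS; this Python/SciPy version is illustrative only, and the exact P values may differ slightly from those reported depending on which of the two tests was applied.

```python
# Illustration of the categorical comparisons described above, using SciPy instead of SPSS.
# The counts are taken from the Results section of this study; exact P values may differ
# slightly from the published ones depending on whether Fisher's exact test or the
# chi-square test was applied.
from scipy.stats import fisher_exact, chi2_contingency

def compare_groups(events_g1, n_g1, events_g2, n_g2):
    """Build a 2x2 table (event vs no event, Group 1 vs Group 2) and test it both ways."""
    table = [[events_g1, n_g1 - events_g1],
             [events_g2, n_g2 - events_g2]]
    _, p_fisher = fisher_exact(table)            # preferred when expected cell counts are small
    _, p_chi2, _, _ = chi2_contingency(table)    # large-sample alternative (Yates-corrected)
    return p_fisher, p_chi2

# Glans dehiscence: 9/57 in Group 1 vs 2/50 in Group 2
print("glans dehiscence", compare_groups(9, 57, 2, 50))

# Meatal stenosis: 9/57 in Group 1 vs 1/50 in Group 2
print("meatal stenosis ", compare_groups(9, 57, 1, 50))

# Overall success: 37/57 in Group 1 vs 46/50 in Group 2
print("overall success ", compare_groups(37, 57, 46, 50))
```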
The ages of the patients in Group 1 ranged from 1 to 12 years, with the first quartile at 1.69 years, the median age at 2.83 years, and the third quartile at 4.38 years. The ages of the patients in Group 2 ranged from 1 to 12 years, with the first quartile at 1.67 years, the median age at 2.25 years, and the third quartile at 3.92 years. In this study, 107 children were included: Group 1 consisted of 57 patients (42 with midshaft hypospadias and 15 with proximal hypospadias), whereas Group 2 consisted of 50 patients (39 with midshaft hypospadias and 11 with proximal hypospadias); 38 of 57 patients in Group 1 and 37 of 50 in Group 2 required dorsal tunica albuginea plication for chordee correction. There were no significant differences in the preoperative characteristics of the patients, including age, type of hypospadias, glans width, or urethral length, between the two groups (P > 0.05, Table ), indicating that the two groups were comparable. The average follow-up duration was 40 months in Group 1 and 32 months in Group 2. The overall success rates for Groups 1 and 2 were 65% (37/57) and 92% (46/50), respectively (P = 0.0008) (Table ). In 2 patients in Group 1 (2/57, 4%), a single coronal urethrocutaneous fistula was observed; the fistula healed spontaneously within 3 months in 1 patient, and the other patient underwent successful urethral fistula repair 6 months later. In 1 patient in Group 2 (1/50, 2%), a urethrocutaneous fistula developed subcoronally after catheter removal and healed spontaneously within three months. There was no significant difference in the incidence of urethrocutaneous fistula between the two groups (P = 1). Glans dehiscence occurred in 9 patients (9/57, 16%) in Group 1; two underwent glansplasty, and seven underwent TIP in the 6th month after surgery. Glans dehiscence occurred in 2 patients (2/50, 4%) in Group 2, and glansplasty was performed in the 6th postoperative month. The incidence of glans dehiscence was lower in Group 2 (P = 0.0451). Meatal stenosis occurred in 9 patients (9/57, 16%) in Group 1 and 1 patient (1/50, 2%) in Group 2. Three patients in Group 1 and one patient in Group 2 were conservatively managed with mometasone furoate cream applied externally to the urethral orifice twice daily for one month, resulting in significant symptom relief. Four patients in Group 1 underwent weekly urethral dilation, with improvement noted after one month. Two patients in Group 1 achieved normal urination following urethral dilation and one month of indwelling catheter placement. None of the patients required urethral stricture incision or urethrostomy. The incidence of meatal stenosis was significantly lower in Group 2 (P = 0.0184). No urethral diverticulum occurred in either group. There were no significant differences in the HOSE assessment between the two groups.
The scores ranged from 12 to 16 in Group 1 and from 12 to 16 in Group 2. Acceptable outcomes (≥ 14) were reported for 94 patients (88%) across the two groups, with no significant difference between Group 1 (48/57, 84%) and Group 2 (46/50, 92%) (P = 0.219) (Table ).

Most surgeons choose TIP urethroplasty for patients with distal hypospadias and mild or no chordee, whereas we chose the onlay island flap technique to treat midshaft and proximal hypospadias with a small glans. When repair is successful, patients obtain a satisfactory penile appearance, with a slit-like urethral orifice located at the tip of the glans, a satisfactorily straightened penis, a urinary stream following a horizontal parabolic trajectory, the ability to have a normal sexual life in adulthood, and a low incidence of postoperative complications. However, when hypospadias is combined with a small glans, the difficulty of surgery increases, and the incidence of complications, especially meatal stenosis and glans dehiscence, increases significantly. We therefore sought a method to reduce the incidence of these complications. The literature on glans reduction techniques is limited. Qin et al. applied limited volume reduction of the dorsal navicular fossa in the treatment of hypospadias, finding the technique to be safe and feasible, with the potential to improve the morphology of the external urethral opening and the direction of urine flow . Additionally, Qin et al. utilized cavernosum reduction in glanuloplasty for moderate-to-severe hypospadias repair requiring urethral plate division, concluding that this approach can reduce postoperative complications . However, neither of these studies addressed the size of the glans, whereas the technique presented in this study is primarily applied to cases of hypospadias with a small glans. The local volume reduction of the dorsal glans groove technique we adopted not only allowed the reconstructed urethra to reach the tip of the glans but also significantly reduced the occurrence of postoperative complications, especially meatal stenosis and glans dehiscence. The principle of this technique is to moderately reduce the volume of the glans to create a more spacious outflow tract, which is associated with a lower likelihood of urethral stricture. In our study, the incidences of both meatal stenosis and glans dehiscence were significantly lower in Group 2 than in Group 1. Meatal stenosis was observed in 2% of patients in Group 2 and in 16% of those in Group 1; other authors have reported incidence rates of meatal stenosis ranging from 0 to 14% in patients with hypospadias and a small glans . Glans dehiscence was observed in 4% of patients in Group 2 and in 16% of those in Group 1; other authors have reported that the incidence of glans dehiscence ranges from 3 to 23% in patients with hypospadias and a small glans . In our study, there were no significant differences in HOSE scores between the two groups: acceptable outcomes (score ≥ 14) were reported for 94 patients (88%) across the two groups, with no significant difference between Group 1 (48/57, 84%) and Group 2 (46/50, 92%) (P = 0.219). Kurdi MO et al. compared hybrid Mathieu urethroplasty (HMU) and tubularized incised plate urethroplasty (TIPU) for managing distal hypospadias in patients with small glans, and there were no significant differences in HOSE scores between the two groups .
For patients with hypospadias and a small glans, some scholars have performed urethroplasty only to the coronal sulcus, without glanuloplasty, in order to avoid postoperative complications. Khirallah M et al. reported that hybrid Mathieu urethroplasty is an effective and dependable approach for treating distal penile hypospadias, particularly in patients with a small glans and a shallow urethral plate. This technique expands the eligibility of the Mathieu procedure, enhances the overall cosmetic outcomes, and maintains a reasonable complication rate . Kurdi MO et al. compared hybrid Mathieu urethroplasty (HMU) and tubularized incised plate urethroplasty (TIPU) for managing distal hypospadias in patients with small glans. Their findings suggest that HMU not only provides better outcomes but also involves a shorter stent duration and a lower incidence of complications than does TIPU . Perovic S introduced a method using a double-faced island flap and/or injection of a hydrogel to enlarge and sculpt small, deformed glans . The optimal catheter size after hypospadias surgery remains a topic of discussion, especially for patients with a small glans. A small glans cannot house the neourethra without tension if the catheter is too large. Some scholars believe that a catheter with a large diameter should be used as often as possible; others believe that a small (6 Fr) catheter can prevent meatal stenosis, whereas some practitioners choose the size of the catheter according to the age of the child. Our experience is to use transurethral diversion for urinary drainage, and we prefer a small silicone catheter, generally a 6 Fr Foley catheter, because the local volume reduction of the dorsal glans groove technique results in a wide external urethral opening; it is therefore not necessary to use a large catheter to prevent meatal stenosis. Moreover, a small catheter is also conducive to closing the glans wings without tension, decreasing the incidence of glans dehiscence. We encountered instances of urethral catheter blockage, some of which were resolved through bladder irrigation, while others necessitated catheter replacement. A glans width less than 14 mm is considered an independent risk factor for complications after hypospadias surgery . For decades, hormone therapy has been used before surgery to increase penis size and improve blood flow through penile tissues. Whether and which hormones should be used before surgery for patients with hypospadias and a small glans remain controversial. Menon P et al. suggested that testosterone should be used with caution in children with distal hypospadias. Although testosterone therapy can increase the amount of available prepuce tissue, patients receiving this treatment are prone to postoperative infection and prepuce edema, which ultimately increases the chance of wound dehiscence . According to Mohammadipour A et al., preoperative hormone stimulation is not suitable for all children with hypospadias, and regular monitoring of hormone use and cessation of hormone therapy once the surgical requirements are met not only provides better surgical conditions but also reduces the incidence of androgen side effects . Gorduza D et al. reported that it was not possible to demonstrate whether hormones had any effect on reducing the incidence of postoperative complications of hypospadias surgery compared with placebo .
In a prospective study by Mittal S et al., the effect of preoperative testosterone on changes in glans size in patients who underwent hypospadias surgery was quantified . Using preoperative androgen stimulation, Do MT et al. confirmed that penile length and glans width increased; however, the incidence of postoperative complications associated with preoperative androgen stimulation did not increase . None of the patients in either study group received hormonal stimulation, either preoperatively or postoperatively. In the future, we may consider exploring the use of androgens prior to surgery.

This study has several limitations. First, the use of this technique results in the loss of glans corpus spongiosum tissue, which is particularly relevant in patients with a small glans, where the available tissue is already limited. Although the amount of tissue loss is relatively minor, it may still affect the sexual sensitivity of the glans, a concern that requires further evaluation. Second, this technique is specifically designed for hypospadias repairs in which the urethral plate is preserved and is not applicable to all types of hypospadias repair. Third, the lack of long-term postoperative follow-up data on glans volume limits the ability to accurately assess the extent of volume reduction. Lastly, this is a single-center retrospective study, and multicenter, large-sample randomized controlled clinical trials are needed to provide more robust and generalizable conclusions.

Local volume reduction of the dorsal glans groove can effectively reduce the incidence of postoperative complications of urethroplasty in patients with hypospadias complicated by a small glans. This technique is used only in selected patients and is not suitable for all types of hypospadias.
Caregiver and Youth Characteristics That Influence Trust in Digital Health Platforms in Pediatric Care: Mixed Methods Study

Background

Digital health can potentially advance the quintuple aim for health care improvement by enhancing the patient experience, improving population health, mitigating rising health care costs, reducing clinician burnout, and enabling health equity . Increased use of wearables that monitor health parameters, such as blood pressure, heart rate and rhythm, and interstitial glucose levels, in real time produces vast amounts of patient-generated health data that, when combined with digital health platforms, can support remote patient monitoring, continuous (rather than episodic) care, and a more personalized care experience . Moreover, patient-generated health and wellness data repositories provide research and quality improvement opportunities. However, there are many obstacles to implementing digital health solutions . Studies investigating the public's perspective on sharing digital health data for clinical care and research have reported concerns related to trust in data sharing, such as lack of anonymity, vulnerability to cyberattacks, and fear of data breaches leading to data misuse . Many authors have sought to define the criteria for trustworthiness in digital platforms and have revealed key themes, including ease of use and ease of platform use, personal recommendations from other known users, and safety and privacy protection measures . The reputation of digital providers and the quality of information are also perceived as fostering trust . Trustworthiness, however, is influenced by a range of sociocultural and political factors , yet few studies have measured their magnitude of influence. In addition, the use of artificial intelligence in medicine is increasing . However, patients have expressed concerns related to the possibility of misdiagnosis and privacy breaches , further highlighting the importance of understanding factors that promote trust in the design of digital health platforms. As a mechanism to enhance the patient experience and improve population health, we are developing a digital health platform (TrustSphere) for the secure sharing of patient-generated health data between patients and clinicians that enables a collaborative clinical care experience. This digital platform also provides opportunities for patients to share their patient-generated health data with researchers via a digitized consent process. Our first test use case is children living with type 1 diabetes (T1D), one of the most common childhood chronic diseases . T1D is characterized by absolute insulin deficiency resulting in impaired blood glucose level regulation and serious lifelong complications, such as cardiovascular disease, kidney failure, and blindness. To mitigate the risk of these complications, individuals living with T1D (and their caregivers) must carefully monitor glucose levels 24 hours a day. Modern diabetes technologies, such as continuous glucose monitoring systems, have been a "game changer": instead of using a glucose meter that requires finger pricks 4 to 6 times per day, patients wear a sensor that sits just under their skin and measures glucose levels every 1 to 5 minutes, with the data "pushed" to a smart device in real time .
Studies show that the use of continuous glucose monitoring systems compared with standard "finger prick" blood glucose monitoring resulted in significantly improved control of glucose levels in children and youth living with T1D . However, the integration of these patient-generated glucose levels with digital health platforms that support a collaborative clinical care experience and provide opportunities for patients to participate in research is lacking, partly due to a lack of digital trust.

This Study

We aimed to conduct a Canada and US-focused mixed methods study involving caregivers of children aged <18 years and youth aged 16 to 17 years to understand the relationship between sociodemographic characteristics (ie, sex, household income, level of education, rural vs urban locations, and experience with chronic disease) and "trust in" and "willingness to use" a digital platform to store and share personal health information (PHI) for clinical care and research. The United States and Canada are large North American countries with developed health care systems that are different yet share numerous similarities in their care models and associated challenges. Both nations grapple with escalating health care costs, inequitable access to care, and disparities in health outcomes. Moreover, there is a mounting level of concern in both countries regarding data security. We postulate that there will be differences in perspectives across different sociodemographic variables, and that understanding these differences will be important to consider in the design and prioritization of features and functionalities of digital health platforms.
Study Population

Population groups that were approached for this survey study included caregivers of youth aged <18 years living in Canada or the United States (excluding Mexico) and youth aged 16 to 17 years living in Canada. Caregivers of children and youth living with T1D accessing care at the BC Children's Hospital Diabetes Clinic (Vancouver, BC) were also invited to participate.
All survey respondents were offered the opportunity to participate in web-based bulletin board discussion groups that explored the topics of trust in data sharing. To be eligible for the web-based bulletin board discussion groups, participants had to be aged >18 years, living in Canada, have at least one healthy child or a child with a chronic disease who is aged <18 years, and be able to read, write, and understand English.

Recruitment

Survey respondents were invited through the following methods: First, caregivers and youth living in Canada and the United States were invited by Insights West, a Canadian marketing research company that maintains a panel of volunteers to electively participate in web-based surveys and focus groups, along with their trusted panel partners (Dynata and Maru/Blue) from their list of adult volunteers. The youth included in this study were the children of the caregiver survey respondents living in Canada and were given parental consent to participate in this study. The target sample size was 1000; 1028 adult panel members and 173 youth responded. Caregivers of children living with T1D and receiving care at the BC Children's Hospital Diabetes Clinic were also recruited via a clinical registry. The survey invitation was sent to 232 caregivers and 100 responded, resulting in a response rate of 43%. No financial incentives or honorariums were offered for survey participation. The web-based bulletin board discussions were facilitated through 2 separate group discussions over 3 days in February 2021. A unique ID code identified caregiver participants who expressed interest in participating in the qualitative study on the quantitative survey; caregivers were not individually identifiable. Individuals who expressed interest were recontacted and asked additional screening questions (ie, age >18 years, living in Canada, have at least one child aged <18 years, and able to read, write, and understand English) before they were invited to participate. Invited caregiver participants then provided informed consent and received a link to the web-based bulletin board discussions. Participants in the bulletin board discussions were offered an honorarium of CAD $75 (US $57) in appreciation of their time.

Data Collection

Overview

This mixed methods study was conducted from December 2020 to January 2021. The goal of the study was to gather caregiver and youth perspectives on elements of digital health delivery, which included perspectives surrounding digital security, privacy and identity, ethics and informed consent, trust in digital health applications and platforms, sharing of digital health information, and perspectives on key features of an integrated digital platform for the delivery of clinical care and conduct of research.

Quantitative Methods

Survey Development

As no existing published validated surveys exploring these questions were available, the survey questions used in this study were collaboratively developed by the research team by drawing upon existing literature while also applying their expertise in qualitative and quantitative research methodologies, clinical care, patient engagement, privacy, procedural and substantive ethics and consent, digital health, and health informatics. The survey questions were also reviewed by 2 physicians in the Division of Endocrinology at BC Children's Hospital and one health informatics researcher with expertise in questionnaire development to ensure clarity and relevance to current clinical practice.
The survey underwent rigorous refinement; however, it was not pilot-tested. The following description of a digital platform was provided in the survey:

A secure online platform that will be customized for child and youth patients and their caregivers, and will integrate a patient's health information such as diagnoses, medications and treatments, appointments, lab test results, wearable data (e.g. FitBit), etc. This platform would use secure and trusted digital identification, and follow the highest healthcare industry and public standards of privacy protection. The platform would help make it easier for children and families to access their health information and care plans, and to communicate directly with healthcare providers. It would also allow users to share their health information and care plans, if desired, with others involved in their child's care, as well as donate their data confidentially for research.

The final survey for adults and youth comprised 32 to 36 questions and 24 to 25 questions , respectively, depending on the responses and branching logic. The survey took 10 to 15 minutes to complete. The response options were predominantly Likert scales; however, some responses were binary, multiple selection, or rank order. There were no questions allowing for open-text responses. Respondents were able to skip questions, with no forced questions. All questions appeared on the screen except for branching questions that would only be displayed if relevant to the respondent's prior answer.

Survey Dissemination

The main survey was provided to all caregivers, and a modified version of this survey was provided to youth, in which 10 caregiver-specific questions were removed. Insights West or its partners sent out invitations to participate via email. Invitations included a brief outline of the survey topic, the approximate time required to complete the survey, and a unique link to the web-based survey hosted by Insights West where each participant could submit a singular survey response. To protect anonymity, participant identifiers were kept separate from survey responses. The same survey as above was sent out by email by the clinic administrator to caregivers of children living with T1D and receiving care at BC Children's Hospital Diabetes Clinic. No reminder emails were sent after the initial invitation.

Qualitative Methods

Discussion Guide Development

The qualitative discussion guide was codeveloped by Insights West in collaboration with the study team. The web-based bulletin board discussion group included 26 questions about trust, data privacy, research, and whether families would use a digital health platform like TrustSphere. Of note, 23 of the original 26 discussion board questions were analyzed for this paper. The 3 discarded questions were unrelated to trust.

Discussion Board

Participants were asked to spend approximately 15 to 20 minutes per day answering questions over the course of 3 days. The total time was 45 to 60 minutes, and individuals could stop participating anytime. A moderator at Insights West monitored the discussion group daily, and follow-up questions were asked publicly to all participants or privately to specific participants as appropriate to probe for additional details. The moderator of the discussion board periodically met with the research team to review the discussion board and to guide moderation. The study team members could freely view the web-based bulletin board discussion and communicate with the moderator to guide probes.
Transcripts of the written discussion questions, answers, and follow-up questions were recorded for qualitative analysis.

Data Analysis

Statistical Analysis

Survey data were exported as an encrypted SPSS (IBM Corp) file and transferred to the research team through a secure file-sharing service. We used descriptive statistics to summarize respondent characteristics (both adult and youth) and to summarize responses to key questions around data storage, safety, trust, and use of a digital platform. The baseline characteristics (ie, age, gender, area of residence, level of education, and household income) of caregivers represented by respondents from Insights West and BC Children's Hospital were similar, and therefore, caregiver data from both survey cohorts were amalgamated into a comprehensive "adult" category. The youth cohort was analyzed separately. The adult category was further subcategorized into adults with and without chronic disease and adults with and without a child with chronic disease. To assess the relationship between survey responses to key questions and a priori selected sociodemographic variables, we used multivariable proportional odds logistic regression models. Missing data and responses for survey questions were recorded but were not included in statistical analysis. Results were summarized as odds ratios (ORs) and corresponding 95% CIs. Analyses were conducted using R (version 4.0.4; R Foundation for Statistical Computing) and Microsoft Excel.

Qualitative Analysis

Qualitative transcripts were transferred to the research team through secure file sharing. Data gathered from the 23 trust-related questions in the bulletin board discussions were analyzed using an inductive coding approach to identify common themes. Initial codes identified by 2 investigators (AV and HL) were discussed, consolidated, and used to independently analyze all transcripts. All coded data were then systematically reviewed by AV and HL to ensure agreement, after which inductive analysis was used to generate themes and subthemes .

Ethical Considerations

Ethics approval was obtained from the University of British Columbia/Children's & Women's Health Centre of British Columbia Research Ethics Board (approval number H20-03105, date of approval 2020-11-26, principal investigator: SA). Implied informed consent was used for surveys and discussion board participation. Findings were reported following the CROSS (Consensus-Based Checklist for Reporting of Survey Studies) checklist for quantitative data and the SRQR (Standards for Reporting Qualitative Research) checklist for the qualitative bulletin board data as far as possible. The study data were deidentified, with participant identifiers kept separate from survey responses. It is important to transparently state our team's positionality. We come from diverse academic, cultural, and personal backgrounds, including different sexes, races, ethnicities, and socioeconomic statuses. We acknowledge that our varying experiences and perspectives shape our approaches to methodology (ie, survey development) and data interpretation (ie, measures of socioeconomic status and comfort or trust with technology), and that privilege and bias may impact our work. We engaged in dialog and critical reflection to navigate these complexities ethically and responsibly to enhance the rigor of our research .
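For readers who want to see the statistical modeling step spelled out, the sketch below fits a multivariable proportional odds (ordinal logistic) regression and converts the coefficients into ORs with 95% CIs. The study's analyses were conducted in R and Microsoft Excel; this Python/statsmodels analogue uses simulated data and hypothetical variable names, and is intended only to illustrate the general approach rather than reproduce the study's models.

```python
# Illustrative only: the study fit multivariable proportional odds logistic regression
# models in R (v4.0.4). This sketch shows the same idea in Python with statsmodels;
# the data are simulated and the variable names are hypothetical, not the study's variables.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "higher_education": rng.integers(0, 2, n),
    "child_chronic_disease": rng.integers(0, 2, n),
})
# Ordinal outcome: a 5-point Likert rating of trust in a digital platform (1 = low ... 5 = high)
latent = 0.4 * df["child_chronic_disease"] - 0.3 * df["rural"] + rng.logistic(size=n)
df["trust"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf], labels=False) + 1

exog = df[["female", "rural", "higher_education", "child_chronic_disease"]]
model = OrderedModel(df["trust"], exog, distr="logit")  # an ordered pandas Categorical outcome also works
result = model.fit(method="bfgs", disp=False)

# In OrderedModel the covariate coefficients come first in result.params, followed by the
# threshold (cut-point) parameters; exponentiating the covariate coefficients gives ORs.
k = exog.shape[1]
params = np.asarray(result.params)[:k]
ci = np.asarray(result.conf_int())[:k]
odds_ratios = pd.DataFrame({"covariate": exog.columns,
                            "OR": np.exp(params),
                            "CI 2.5%": np.exp(ci[:, 0]),
                            "CI 97.5%": np.exp(ci[:, 1])})
print(odds_ratios.round(2))
```

Under the proportional odds assumption, exponentiating a covariate's coefficient gives the odds of reporting a higher trust category per unit change in that covariate, holding the other covariates constant.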
The 3 discarded questions were unrelated to trust.

Discussion Board

Participants were asked to spend approximately 15 to 20 minutes per day answering questions over the course of 3 days. The total time was 45 to 60 minutes, and individuals could stop participating anytime. A moderator at Insights West monitored the discussion group daily, and follow-up questions were asked publicly to all participants or privately to specific participants as appropriate to probe for additional details. The moderator of the discussion board periodically met with the research team to review the discussion board and to guide moderation. The study team members could freely view the web-based bulletin board discussion and communicate with the moderator to guide probes. Transcripts of the written discussion questions, answers, and follow-up questions were recorded for qualitative analysis.
Statistical Analysis

Survey data were exported as an encrypted SPSS (IBM Corp) file and transferred to the research team through a secure file-sharing service. We used descriptive statistics to summarize respondent characteristics (both adult and youth) and to summarize responses to key questions around data storage, safety, trust, and use of a digital platform. The baseline characteristics (ie, age, gender, area of residence, level of education, and household income) of caregivers represented by respondents from Insights West and BC Children’s Hospital were similar, and therefore, caregiver data from both survey cohorts were amalgamated into a comprehensive “adult” category. The youth cohort was analyzed separately. The adult category was further subcategorized into adults with and without chronic disease and adults with and without a child with chronic disease. To assess the relationship between survey responses to key questions and a priori selected sociodemographic variables, we used multivariable proportional odds logistic regression models. Missing data and responses for survey questions were recorded but were not included in statistical analysis. Results were summarized as odds ratios (ORs) and corresponding 95% CIs. Analyses were conducted using R (version 4.0.4; R Foundation for Statistical Computing) and Microsoft Excel.
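To make the modeling step concrete, the minimal sketch below shows one way a proportional odds (ordinal logistic) regression of an ordered survey item on dummy-coded sociodemographic predictors could be fit and converted to ORs with 95% CIs. This is an illustration only, not the study's analysis code: the actual models were fit in R, and the file name, column names, and response labels here (survey_responses.csv, concern, age_group, residence, education, chronic_disease) are hypothetical.

```python
# Illustrative sketch only; not the study's analysis code. File and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Load a hypothetical survey extract with one ordered outcome item and several predictors.
df = pd.read_csv("survey_responses.csv")

# Treat the concern item as an ordered categorical outcome (lowest to highest concern).
df["concern"] = pd.Categorical(
    df["concern"],
    categories=[
        "not at all concerned",
        "not very concerned",
        "somewhat concerned",
        "very concerned",
    ],
    ordered=True,
)

# Dummy-code the a priori selected sociodemographic predictors, dropping reference levels.
predictors = ["age_group", "residence", "education", "chronic_disease"]
X = pd.get_dummies(df[predictors], drop_first=True).astype(float)

# Fit the proportional odds (ordinal logistic) model.
model = OrderedModel(df["concern"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)

# The first len(X.columns) fitted parameters are covariate coefficients; the rest are thresholds.
k = X.shape[1]
params = np.asarray(result.params)
ci = np.asarray(result.conf_int())

# Exponentiate the covariate coefficients and CI bounds to report ORs with 95% CIs.
or_table = pd.DataFrame(
    {
        "OR": np.exp(params[:k]),
        "CI 2.5%": np.exp(ci[:k, 0]),
        "CI 97.5%": np.exp(ci[:k, 1]),
    },
    index=X.columns,
)
print(or_table.round(2))
```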
Qualitative Analysis

Qualitative transcripts were transferred to the research team through secure file sharing. Data gathered from the 23 trust-related questions in the bulletin board discussions were analyzed using an inductive coding approach to identify common themes. Initial codes identified by 2 investigators (AV and HL) were discussed, consolidated, and used to independently analyze all transcripts. All coded data were then systematically reviewed by AV and HL to ensure agreement, after which inductive analysis was used to generate themes and subthemes.
Ethics approval was obtained from the University of British Columbia/Children’s & Women’s Health Centre of British Columbia Research Ethics Board (approval number H20-03105, date of approval 2020-11-26, principal investigator: SA). Implied informed consent was used for surveys and discussion board participation. Findings were reported following the CROSS (Consensus-Based Checklist for Reporting of Survey Studies) checklist for quantitative data and the SRQR (Standards for Reporting Qualitative Research) checklist for the qualitative bulletin board results as far as possible. The study data were deidentified, with participant identifiers kept separate from survey responses.

It is important to transparently state our team’s positionality. We come from diverse academic, cultural, and personal backgrounds, including different sexes, races, ethnicities, and socioeconomic statuses. We acknowledge that our varying experiences and perspectives shape our approaches to methodology (ie, survey development) and data interpretation (ie, measures of socioeconomic status and comfort or trust with technology), and that privilege and bias may impact our work. We engaged in dialog and critical reflection to navigate these complexities ethically and responsibly to enhance the rigor of our research.

Quantitative Survey: Perspectives on Digital Security, Sharing of PHI, and the Value of a Digital Platform

Overview

Out of 1128 caregivers, 685 (60.7%) responded to all questions, with a slightly greater number of caregivers located in Canada (n=603, 53.5%) versus the United States (n=522, 46.2%). Among 173 youth, 129 (74.5%) responded to all questions. All the youth were from Canada. Among the 1128 caregivers, 231 (20.4%) reported being diagnosed with a chronic health condition, and 198 (17.6%) reported having a child diagnosed with a chronic condition. Among the 231 caregivers with a chronic health condition, 69 (29.9%) also had a child with a chronic health condition. Perspectives of caregivers and youth, stratified by caregivers with or without a chronic disease and caregivers with or without a child with a chronic disease, are summarized on: (1) knowledge about how and where PHI is stored and who has access to it; (2) trust in health care providers (HCPs), governments, and organizations (ie, hospitals) in keeping PHI secure; (3) willingness to share PHI for not-for-profit health research; and (4) the value and use of a digital platform as described in the survey (Methods section). Caregivers and youth had a similar understanding of where PHI is stored and who can access it, as well as similar trust that HCPs, governments, and organizations implement regulations to keep PHI secure. Compared with caregivers, more youth were willing to share their PHI for research and on a digital platform for clinical care. A larger proportion of caregivers with a chronic disease or a child with a chronic disease reported that they understood who could access their child’s PHI compared with those without a child with a chronic disease. Further, among caregivers, having a child with a chronic disease was associated with greater trust that PHI is secure, more willingness to share data for research, and greater agreement that storing their child’s PHI on a digital platform would be a positive change.
When asked about their level of concern regarding web data privacy and security, almost all caregivers (1039/1128, 92.11%), youth (160/173, 92.5%), caregivers with a chronic disease (218/231, 94.4%), and caregivers of a child with a chronic disease (181/198, 91.4%) reported that they were very or somewhat concerned. Caregivers living with a chronic disease represented the respondent group with the highest level of concern, with 76.2% (176/231) reporting being very concerned. Among respondents’ selection of their top 3 choices of security processes that would make digital platforms more trustable, nearly half of all caregivers indicated that features such as multifactor authentication (539/1128, 47.78%) and notification of account changes and activity (504/1128, 44.68%) would improve their trust in a digital platform. These were closely followed by safety features such as using a trusted sign-in partner, for example, a banking or government services account (414/1128, 36.7%), and strong minimum password strength requirements (412/1128, 36.52%). The same 4 security features were most important for the youth. After being provided with a description of a digital platform (see the Survey Development subsection above), respondents were asked if they would be likely, unlikely, or undecided to use this platform. More youth (87/173, 50.3%) than caregivers (465/1128, 41.22%) stated they were likely to use the platform; however, a sizable portion of caregivers (407/1128, 36.08%) and youth (63/173, 36.4%) were undecided. A higher proportion of caregivers with a chronic condition (130/231, 56.3%) or who have a child with a chronic condition (130/198, 65.7%) responded that they were likely to use the described platform. Most caregivers were comfortable sharing their child’s PHI on a digital platform, including demographics (694/1128, 61.52%), contact information (639/1128, 56.64%), laboratory test results (770/1128, 68.26%), diagnoses (758/1128, 67.19%), medications, procedures, and treatments (780/1128, 69.15%), medical imaging (777/1128, 68.88%), a list of their child’s HCPs (787/1128, 69.77%), health habits (such as physical activity and sleep habits; 788/1128, 69.85%), data from applications (751/1128, 66.58%), data from health devices (759/1128, 67.29%), infant feeding habits (760/1128, 67.38%), mental or emotional health (677/1128, 60.02%), immunization records (835/1128, 74.02%), family medical history (746/1128, 66.13%), dental health (850/1128, 75.35%), and allergies (838/1128, 74.29%). Overall, for all types of PHI, more caregivers with a chronic condition or caregivers of a child with a chronic condition were comfortable sharing their child’s PHI on a digital platform. Respondents also rated how helpful a digital platform would be for children and youth, caregivers, HCPs, and researchers. Ordinal regression analysis was used to examine 3 key questions related to trust, willingness to share data for research, and the value of a digital platform.

Trust

(Question: “In general, what is your level of concern regarding data privacy and security issues when you are engaging in online activity?”; OR<1=lower level of concern.) Caregivers living in suburban (OR 0.72, 95% CI 0.56-0.92) or rural areas (OR 0.66, 95% CI 0.46-0.95) were less likely to report concern about web data privacy and security compared with caregivers living in urban areas.
In addition, those who completed an undergraduate degree (OR 1.82, 95% CI 1.3-2.55) or a graduate degree (OR 2.5, 95% CI 1.68-3.73) had higher odds of reporting concern regarding data privacy and security than those who only completed secondary or trade school. Caregivers living with a chronic disease had higher odds of reporting concern (OR 1.81, 95% CI 1.35-2.44) than caregivers without a chronic disease.

Sharing Data

(Question: “I’m willing to share some of my child’s/children’s health information confidentially if it helps create progress in nonprofit health research.”; OR<1=higher level of willingness.) Compared with no chronic disease, caregivers living with a chronic disease (OR 0.71, 95% CI 0.53-0.97) or caring for a child with a chronic disease (OR 0.51, 95% CI 0.34-0.77) were more likely to be willing to share PHI for not-for-profit research.

Value of a Digital Platform

(Question: what is the likelihood that you would use a digital platform [as described in the survey]?; OR<1=more likely to use.) Compared with caregivers aged 36 to 50 years, those aged 18 to 35 years were more likely (OR 0.63, 95% CI 0.45-0.89), and those aged 51 to 65 years (OR 1.64, 95% CI 1.25-2.17) and >65 years (OR 2.39, 95% CI 1.39-4.10) were less likely, to use the described digital health platform. Compared with respondents located in Canada, those in the United States were more likely to use a digital platform (OR 0.67, 95% CI 0.53-0.85). Moreover, respondents living in suburban (OR 1.57, 95% CI 1.23-2.01) and rural (OR 1.58, 95% CI 1.11-2.26) areas were less likely to use a digital health platform when compared with those living in urban areas. Respondents living with a chronic health condition (OR 0.63, 95% CI 0.47-0.84) and those who have a child with a chronic condition (OR 0.34, 95% CI 0.23-0.5) were more likely to use such a platform.
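As a reading aid for the ORs above (not a statement of the authors' exact model specification), a proportional odds model of this kind can be written, under a commonly used parameterization that is assumed here, as follows; each reported OR is the exponentiated coefficient of the corresponding covariate.

```latex
% Proportional odds (ordinal logistic) model; parameterization assumed, not taken from the text.
\operatorname{logit} P(Y \le j \mid \mathbf{x})
  = \theta_j - \mathbf{x}^{\top}\boldsymbol{\beta},
  \qquad j = 1, \dots, J-1,
  \qquad \mathrm{OR}_k = e^{\beta_k}.
```

Under this form, an OR below 1 for covariate k indicates lower odds of a response in a higher outcome category; what a "higher" category represents differs by item, which is why each question above carries its own direction key (eg, OR<1=lower level of concern for the trust item, OR<1=more likely to use for the platform item).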
Qualitative Bulletin Board: Perspectives on Digital Security, Sharing of PHI, and the Value of a Digital Platform

Of the 40 caregivers who expressed interest in participating, 23 (58%) caregivers completed the web-based bulletin board discussion group process. Among the 23 participants, 11 (48%) were caregivers of a child with a chronic disease and 12 (52%) were caregivers with a healthy child. The most common theme raised in the web-based discussions related to digital security concerns and the fear of a data breach being somehow connected to their child. One subtheme was participants’ worry that information they share about their child could be linked to their child in a way that might resurface in the future and impact job prospects or their ability to receive insurance. A second subtheme was parental concern that the information would be used for financial profit and be sold to third parties rather than be used altruistically. As stated during one web-based discussion: [I am] not sure who would be really looking at this info...is it just the doctor that you’re dealing with...or can it be looked at by the receptionist that’s working at the office and gets ahold of your computer file and can get info on you? Another participant stated, “I would worry about data being secure or sold to third parties.”

A second major theme was caregivers’ recognition of the benefits of digital health platforms. Caregivers were generally open to sharing information and appeared amenable to using the digital platform. Caregivers thought the digital platform would offer several benefits; subthemes of identified potential benefits included: (1) being able to access and share information more easily (test results); (2) saving time and effort coordinating their child’s health care; (3) being able to set or receive alerts and reminders for appointments, results, or action; (4) faster access to medical consultations or support (web-based or in person); (5) access to additional resources they might not be aware of; and (6) benefit to the child as they might be more comfortable interacting with the health care team through the digital platform rather than in person. As stated by one participant: This is a great idea. The healthcare industry really needs to move into the 21st century. All people should have access to their own health records. As long as it is all secure, I am ok with it being online. I love the idea, especially in this COVID time, meeting my doctor virtually when it makes sense.

A third major theme was the trustworthiness of individuals and institutions involved with digital health platforms. One key subtheme was the importance of a trusted source of information when learning about the digital health platform. Web-based discussion board participants noted that they would prefer to be told about the digital platform by a trusted source (such as their physician or clinician) with the benefits, ease of use, and security information clearly outlined in any materials provided to them. When asked if they would seek out digital services like applications if their child were diagnosed with a chronic health condition, most stated they would talk to a clinician first as they place much trust in their physician. When asked about the digital platform described in the survey specifically, most reported that they would use it if it were recommended to them by their physician. Yet, if their physician did not recommend it, they might consider using it if other trusted sources (community groups, friends and family, the media) gave it positive feedback. They also might search on the web; however, a recommendation from a trusted source would make them more receptive to trying a new digital platform. When asked if their physician recommended this digital platform, one participant stated: I would be more likely if they recommended it, yes. I would assume they have been educated on the positive aspects of the app and can see how this would benefit the child. A second subtheme was trust in the recipients with whom they might share their health data. When asked with whom they were most comfortable sharing their child’s PHI, most cited their family physician or other health professionals. Respondents were also willing to share PHI with hospitals, Canadian research institutions, universities, specialists, their child’s school, and government agencies (eg, Health Canada).

The fourth theme generated in the web-based discussions was the tension between the barriers and benefits of additional new digital accounts and technology. Over half of the web-based discussion group participants (13/23, 57%) reported that they would find it overwhelming to keep track of another digital account. Some were concerned that there might be a steep learning curve or that using the platform might be challenging.
A few participants explained, “I don’t need any more technology in my life.” However, other participants disagreed and stated that once they were over the initial learning curve, adopting this digital platform would allow their child to receive better care coordination, making it worthwhile. They explained that it was easy for the password to be saved on their phone, so they were not concerned about additional tasks required by the digital platform. As explained by one participant: I have a lot of digital accounts, I feel it is the way of the world. I don’t think one more would be an issue. Plus, there is a lot of information here, I feel like this is a one-stop shop for all our healthcare-related info. If I had to have multiple healthcare accounts, I would find it overwhelming, but since everything health-related seems to be in one spot, it is quite handy.
Principal Findings

We identified novel associations between sociodemographic factors and trust in digital health applications to share PHI for clinical care delivery and research among caregivers of children and among youth aged 16 to 17 years. We found that living in an urban area (vs rural), having an undergraduate or graduate degree (vs secondary or trade school), and having chronic disease experience (vs no chronic disease experience) were associated with greater concern regarding data privacy and security. Interestingly, those with chronic disease experience had the highest level of concern yet, compared with those without chronic disease experience, were more willing to share PHI for not-for-profit research and were more likely to use a digital platform for clinical care and chronic disease management. Studies have mostly reported on adults’ willingness to share personal data for research. Our study adds to this growing literature by reporting on the perspectives of caregivers, as delegates of their children, on digital trust and digital platform use as it relates to sharing PHI for clinical care delivery and research.

Comparison to Previous Work

Lack of digital trust has notably hindered the widespread adoption of digital health platforms. Health care is increasingly being delivered in digital and virtual environments, making strong digital trust and identity necessary to support 2-way interactions between clinicians and patients, and patient participation in research. To unlock the potential of digital health, it is critical to understand the differing perspectives on digital trust of citizens across different sociodemographic groups when designing digital health applications to optimize usability, feasibility, and adoption. Like our study, other studies assessing perceptions of digital services, such as the internet or social media, have demonstrated that individuals with lower levels of education tended to be less concerned with web privacy, while those with a college or graduate degree were more likely to take additional security measures, such as encrypting emails to protect their privacy. Age also influences the adoption of digital health technologies and people’s willingness to share personal health data for research, although our study did not show this. In a survey study, Nunes Vilaza et al found that, compared with older individuals (>27 years), younger participants (<27 years) were more willing to share personal data for health research.
We observed that caregivers with chronic disease experience had the highest level of concern for data privacy and security yet were more likely to use a platform like TrustSphere for clinical care and to share personal health data for not-for-profit research. Nunes Vilaza et al found no difference between individuals who self-reported having good, very good, or excellent health compared with those who self-reported fair or poor health in their willingness to share PHI for research. Robbins et al examined health application use stratified by self-reported health and chronic disease status. They found that individuals who self-reported very good or excellent health, compared with poor health, were likelier to download health applications. However, no significant difference in downloading health applications was found when examined based on chronic disease status. The results of our study show that trust in digital health may be connected to one’s familiarity with the health care system and the challenges that patients face in accessing their PHI. Studies show that individuals with chronic illness are more likely to make altruistic choices, including participating in clinical research. In addition, individuals with chronic disease experiences may view the loss of privacy as worthwhile to progress medical research and to benefit others or future patients. Involvement in clinical research and the use of digital health services is often an empowering experience for participants, as many gain additional connections to health care professionals or other similar individuals through their participation or access relevant, practical information toward managing their illness. Our survey also demonstrates that individuals are strongly motivated to share PHI if it positively impacts their children’s or others’ lives. Similar to published literature, we found that attention to optimizing web privacy and security is critical when developing a new digital health platform. Establishing digital trust can be achieved via internal technological features as well as external validation of the technology by trusted sources. Internal security features include two-factor authentication and encrypted storage of PHI. Our study also identified notification of account changes or activity, use of a trusted sign-in partner, and strong minimum password requirements as important in gaining digital trust. External features that enhance digital trust include recommended use by trusted HCPs or health care organizations or endorsements by friends, family, or other patients or caregivers. In addition, research by Graham et al highlights the significance of collaboration between researchers and patients, underscoring the importance of co-design approaches. This collaborative effort enables teams to gain deeper insights into the needs of end users (eg, security features), facilitating the development of interventions that are more aligned with user preferences and expectations. Through iterative co-design processes, there is potential for enhanced user engagement and overall user experience, ensuring that mobile applications effectively address their users’ needs.

Strengths and Limitations

Our study had both limitations and strengths. First, our survey instrument was not validated and was a voluntary, self-report survey, introducing the potential for information, recall, social desirability, and sampling bias.
We used Insights West’s panel of volunteer participants to mitigate bias and maximize study validity by accessing a large sample size, yet we acknowledge our results should be interpreted with caution. For example, participants self-reported their chronic disease status, and people who have inherent distrust in sharing information on the web might have been less likely to participate in our web-based survey. Despite using volunteer panels, participants living in urban or suburban areas and those with higher levels of education and household income were overrepresented in our survey sample. Furthermore, we could not stratify our findings by ethnicity or race or by family structure (two-parent vs single-parent homes). Consequently, the validity of our survey results is challenged, potentially affecting the accuracy of estimations regarding relationships between variables and the generalizability of this study’s findings to the entire North American population. Second, the regression analysis was exploratory in nature with the possibility of residual confounding, and therefore, results should be interpreted with caution. Third, our sample size for the qualitative bulletin boards was small, limiting our qualitative analyses and ability to triangulate quantitative and qualitative data. As such, these findings cannot be viewed as representative and are only presented in this paper to supplement the quantitative survey findings. Finally, our survey data were gathered at the peak of the COVID-19 pandemic, a period marked by a surge in enthusiasm for and adoption of digital health tools out of necessity. Therefore, it is essential not to overlook the potential confounding effect of the pandemic on attitudes toward these technologies at the time of data collection. Study strengths included a mixed methods design and a robust sample size of >1000 respondents from across Canada and the United States, strengthening the generalizability of our results. Further, we included the perspectives of youth aged 16 to 17 years on digital trust, which is understudied.

Conclusions and Future Directions

Our research confirms that there is a willingness among caregivers and youth to use a digital platform like TrustSphere for clinical care delivery and to share their PHI for not-for-profit research. However, perceptions around digital trust vary across sociodemographic groups. Therefore, when designing digital applications, diverse engagement of end users is essential. The results of this study will inform the prioritization of the technological features of TrustSphere’s “digital front door” and have validated the importance of engaging end users (patients or caregivers and health care professionals) as early as possible in the iterative co-design of TrustSphere to optimize the value of the digital tool and ultimately enhance digital trust. Broadly, this study provides much-needed guidance to researchers and technology developers on what it takes to overcome the barrier of digital trust that has, to date, impeded the comprehensive uptake of digital health platforms. Additional research is needed to characterize the digital needs of underrepresented or vulnerable groups to ensure that digital health solutions are accessible to all.
We found that living in an urban area (vs rural), having an undergraduate or graduate degree (vs secondary or trade school), and having chronic disease experience (vs no chronic disease experience) increased the level of concern regarding data privacy and security. Interestingly, those with chronic disease experience had the highest level of concern yet compared with those without chronic disease experience, were more willing to share PHI for not-for-profit research and were more likely to use a digital platform for clinical care and chronic disease management. Studies have mostly reported on the perspectives of adults’ willingness to share personal data for research . Our study adds to this growing literature by reporting on the perspectives of caregivers as delegates of their children on digital trust and digital platform use as it relates to sharing PHI for clinical care delivery and research. Lack of digital trust has notably hindered the widespread adoption of digital health platforms . Health care is increasingly being delivered in digital and virtual environments, making strong digital trust and identity necessary to support 2-way interactions between clinicians and patients, and patient participation in research. To unlock the potential of digital health, it is critical to understand the differing perspectives on digital trust of citizens across different sociodemographic groups when designing digital health applications to optimize usability, feasibility, and adoption . Like our study, other studies assessing perceptions of digital services, such as the internet or social media, have demonstrated that individuals with lower levels of education tended to be less concerned with web privacy , while those with a college or graduate degree were more likely to take additional security measures, such as encrypting emails to protect their privacy . Age also influences the adoption of digital health technologies and people’s willingness to share personal health data for research, although our study did not show this. In a survey study, Nunes Vilaza et al found that, compared with older individuals (>27 years), younger participants (<27 years) were more willing to share personal data for health research.
Antimicrobial Resistance in the Context of Animal Production and Meat Products in Poland—A Critical Review and Future Perspective | 19520f23-534d-4d2f-be94-a7cda33ca47f | 11676418 | Microbiology[mh] | Antibiotic resistance is one of the most serious threats to public health worldwide . The development and spread of antibiotic resistance are influenced by a variety of factors, including the misuse and overuse of antibiotics in human medicine, environmental contamination, and agricultural practices. Among these, the use of antibiotics in animal husbandry and meat contamination plays a particularly significant role, as resistant bacteria and resistance genes can infiltrate the food chain and impact human health . It is increasingly shown that one of the key factors leading to this phenomenon is the failure to comply with regulations on the use of antibiotics in animal husbandry (in animals raised for food) . Exceeding permitted doses, the inappropriate use of antibiotics for disease prevention in healthy animals, and the use of drugs critical for the treatment of human infections in livestock farming contribute to the selection of resistant bacteria that can infiltrate the food chain and pose a threat to consumers . Despite the introduction of regulations and recommendations to limit antibiotic use in the animal husbandry sector, significant compliance gaps remain, making it difficult to effectively combat the spread of antibiotic resistance . Antibiotic-resistant microorganisms represent one of the most serious challenges of modern medicine and agriculture . The World Health Organization (WHO) has identified antimicrobial resistance (AMR) as a global health and food security threat, emphasizing the need for a “One Health” approach that integrates human, animal, and environmental health strategies . The ability of bacteria to develop resistance to antimicrobial drugs is becoming increasingly widespread, and it affects not only the treatment of infections in humans but also the food sector . Meat and meat products can be a reservoir of antibiotic-resistant pathogens, raising concerns for public health and the effectiveness of treating infectious diseases . As one of the important meat producers in Europe, Poland faces the challenge of monitoring and controlling microbial resistance in the meat sector . Resistance to drugs such as ampicillin, tetracycline, or gentamicin has been observed in numerous bacterial isolates ( Escherichia coli , Staphylococcus spp., Enterococcus spp., Klebsiella pneumoniae , and Citrobacter spp.) . The main factors contributing to the spread of antimicrobial resistance in foods of animal origin, with a particular focus on meat and meat products, are the inappropriate and excessive use of antimicrobials . In practice, about 80% of globally produced antibiotics are used in animal production; however, some that are classified as antibiotics have other purposes in animal production than for treating diseases. Some farmers use subtherapeutic doses of antibiotics to obtain various aims such as animal growth increase, weight gain acceleration, digestion improvement, a higher feed conversion ratio (FCR) and to prevent or reduce disease outbreaks . Residues of veterinary medicines may be present in food of animal origin (ASF) even if their use is fully regulated by law . However, some farmers do not pay sufficient attention to withdrawal periods (WDPs) which increases the risk of spreading antimicrobial resistance in food worldwide, especially in developing countries . 
The European Medicines Agency (EMA) defines a Maximum Residue Limit (MRL) as an acceptable concentration of residues in food products, and the European Union requires that foods do not contain residues of veterinary medicines above the MRL. The European Union (EU) legally requires that foods like meat, milk, or eggs not contain residue levels of veterinary medicines or biocidal products that could endanger the consumer’s health. Regulation (EC) No 470/2009 of the European Parliament and of the Council defines rules for setting maximum permissible levels (MRLs), measured in milligrams per kilogram for solid products and milligrams per liter for liquids . Antibiotics can accumulate in tissues such as muscles and organs, and their residues act as selection factors that promote the development of resistance in the microorganisms present . Antibiotic residues in muscles post-mortem represent a selection stress that allows only those bacteria with appropriate resistance mechanisms (including enzymatic degradation of the antibiotic, modification of the antibiotic’s target site, or active removal of the substance from the cell) to survive . Such strains not only survive but can also transfer resistance genes to other bacteria through a process of horizontal gene transfer . This requires the interaction of regulatory authorities in monitoring and enforcement and using accurate analytical methods to detect AMR in meat products . The use of antimicrobials in animal husbandry is inevitable . AMR bacteria are frequently detected in meat and meat products, which results from the use of antibiotics during the treatment of sick animals or the preventive treatment of healthy ones . Among pharmaceutical residues, the most common are antibiotics and anthelmintic agents, with antibiotics being the most extensively used in both human and veterinary medicine . Due to health concerns, antibiotics for food preservation have been banned in many countries . This review aims to critically discuss the available literature, based on an expert analysis of the topic of antimicrobial resistance in microorganisms isolated from meat and meat products in Poland, as well as the use of antimicrobial agents in animal production in this country. The review focuses on the main factors contributing to the spreading of antibiotic resistance, such as the excessive and improper use of antimicrobial agents in animal husbandry. It also discusses the legal regulations regarding veterinary drug residues in animal-derived food products, as well as the importance of monitoring and enforcing these regulations to protect public health. The study aims to highlight the risks associated with antimicrobial resistance in meat and meat products and the need for further research and monitoring in this area. 2.1. Importance of Antibiotic Use in Livestock Production Animal husbandry is of considerable importance in agriculture in countries of the European Union. Obtaining the best results from animal husbandry depends primarily on the use of high-quality feed . Ensuring the free circulation of safe and valuable food and feed products is a key element of the internal market, which has a significant impact on consumer health and satisfaction . The use of antibiotics is inextricably linked to obtaining the best results from animal husbandry . Most of the residues of these agents are found in various food products—both of animal and plant origin . 
Humans can come into contact with antibiotics from two main sources: firstly, from medicines prescribed by doctors, and secondly, from substances used in animal husbandry . These antibiotics can cause serious health problems in humans, which has prompted the introduction of maximum residue limits in food safety legislation. The most important factor contributing to the presence of antibiotics in food is their overuse (including overdosing and ignoring the withdrawal period), as well as the use of antibiotic-contaminated water and improper disposal of animal manure . The use of antibiotics in animal feed for growth promotion became more prominent in the 1950s and 1960s, when various antibiotics with different mechanisms of action were introduced into animal feed. Supplementation of animal feed with antibiotics and antibiotic growth promoters (AGPs) continued until public health concerns arose about off-target drug levels in meat and animal products, increased antimicrobial resistance, intestinal dysbiosis, etc. . Based on the results of studies showing an increase in the number of resistant bacteria under the influence of the cessation of AGP use in various countries, the European Union banned the use of antibiotic growth promoters in all Member States as of 1 January 2006 (Regulation (EC) No 1831/2003) . As of that year, antibiotics in animal husbandry must be used for therapeutic purposes. The cost of producing medicated feed is high, and meeting veterinary requirements is difficult for small- and medium-sized farms, which can lead to non-compliance . Pharmaceutical and veterinary control often lack the tools to prevent illegal trade in veterinary medicines . A monitoring carried out in Poland showed that antibiotics were used in animal farms, especially on turkey and broiler farms. The monitoring results indicated legitimate concerns about the impact on public health now and in the future . 2.2. Challenges of Antibiotic Use The main purpose of antimicrobial use is to control and treat bacterial infections. Antibiotics are administered to symptomatic animals, and the agent dose is adjusted according to their condition. Among farm animals, individual treatment is used for dairy cows and calves . It should be noted that such treatment is ineffective for animals in large flocks, e.g., more than 30,000 poultry or 100 piglets . Antimicrobials are administered to the whole herd for large groups of animals when individual animals show signs of disease. This is known as metaphylaxis . Early treatment of the entire herd reduces the number of sick or dead animals and lowers the use of antibiotics, resulting in lower treatment costs . The prophylactic use of antibiotics is a way of preventing possible infections to which animals are exposed . In this case, agents are administered to individuals or the entire herd when there are no clinical signs of disease, but there is a high probability of infection . Antibiotics are also administered prophylactically at so-called critical moments for the animals, e.g., when mixing animals from different herds, transport, or at the end of lactation of dairy cows . AGPs were another way of using antibiotics in animal production . However, the use of antimicrobial substances in animal husbandry was banned by law in 2006 . The effect of growth promoters was not only to increase weight gain (by 4–28%) but also to improve nutrient absorption, leading to more efficient feed conversion (by 0.8–7.6%) . 
In addition, there were also reductions in methane and ammonia emissions and more efficient phosphorus utilization . In addition, the use of AGPs reduced the number of sick animals and livestock losses . The use of such agents prevented gastrointestinal infections and maintained the balance of the intestinal microflora . 2.3. Antibiotic Use in Poland The use of antibiotics in livestock production is a globally important issue, and the challenges of monitoring and reducing their use have been repeatedly highlighted in the literature. Pyzik et al. note the lack of global reporting systems for antibiotic use and call for mandatory reporting in every country, not just in Europe. There is also a need to implement monitoring procedures, more effective biosecurity, better governance, and educational efforts targeting groups such as food producers and growers to raise awareness of the risks of antibiotic use. In Poland, as the report of the Supreme Chamber of Control (NIK) indicates, the use of antibiotics in livestock production is widespread, and supervision proves ineffective. For example, in the Lubuskie Voivodeship, as many as 70% of farmers on monitored farms used antibiotics, always justifying their use for therapeutic reasons. However, the NIK points to the lack of full documentation of treatment and weaknesses in the surveillance system, which often relies on breeders’ statements. The scale of the use of antibiotics remains unknown, although data show a 23% increase in their sale between 2011 and 2015. The NIK recommends making reporting mandatory, creating a nationwide database and implementing educational programs for breeders to better control the situation and counter antibiotic resistance. A report by the European Medicines Agency (EMA) shows that although Poland has seen a decline in sales of veterinary antibiotics, their use per kilogram of body weight of production animals still exceeds the EU average. The most-used classes of antibiotics in Poland are tetracyclines, penicillins, and sulfonamides, and the use of critically important antibiotics for human medicine has been limited. Programs being implemented, such as the National Program for the Protection of Antibiotics, aim to rationalize their use and educate farmers and veterinarians. Despite progress, continuing to reduce the use of these agents, especially those critical to human health, remains a challenge. The World Health Organization (WHO) reports that some 27 different antimicrobials are used in animals, including critically important macrolides, ketolides, glycopeptides, quinolones, polymyxins, and cephalosporins (third and fourth generation) for human medicine. The lack of a global surveillance system for the use of antibiotics in the livestock sector is a major gap. In human medicine, the Global Antimicrobial Surveillance System (GLASS) has been implemented to collect and analyze antibiotic resistance data. An analogous system is lacking in the animal sector, although the Scandinavian countries that have implemented advanced monitoring systems can serve as an example of good practice. In low- and middle-income countries, this surveillance is only just developing, with global resistance trends mapped mainly by point prevalence surveys . Studies have shown that between 2000 and 2018, resistance levels increased in chickens and pigs, while stabilizing in cattle, with significant geographic differences . 
These data underscore the urgent need for global action to reduce antibiotic use in animal husbandry, implement more effective surveillance mechanisms, and promote the rational use of antimicrobials in animal production.
Modern consumers pay attention to the health-promoting properties of food. Meat and meat products are perceived as a source of protein, vitamins, and minerals . Meat is also a source of bioactive compounds such as L-carnitine, taurine, anserine, carnosine, coenzyme Q10, glutathione, bioactive peptides, isomers of linoleic acid (CLA), creatin, and haem iron . In addition to compounds essential for supporting human health, meat may contain drug residues. They result from the inappropriate use of veterinary medicines and the failure to comply with the withdrawal period . This, in turn, can significantly reduce the quality and safety of meat and meat products, which is a major challenge in the context of producing healthy and safe food . Most raw materials of animal origin undergo heat treatment or other processing methods before being consumed. The purpose of these is, among other things, to increase digestibility, improve sensory properties and ensure food safety—by eliminating pathogens . Heat treatment of meat also reduces the concentration of drug residues through protein denaturation, loss of water and fat, and a change in pH . For example, the concentration of doxycycline in meat decreases during cooking, and the residues are excreted from the muscle with cooking loss . Different food processing techniques affect changes in antibiotic content (degree of reduction) in various ways, which include the type and parameters of processing, the kind of meat, the type of antibiotic, or the initial antibiotic content . Boiling proved to be one of the most effective methods of heat treatment. For poultry boiled at 100 °C for 5 min, the enrofloxacin (ENO) concentration decreased from 746.34 ± 5.62 μg/kg to 237.53 ± 2.13 μg/kg, representing a 68.17% reduction . Similarly, oxytetracycline (OTC) decreased from 824.16 ± 7.20 μg/kg to 383.33 ± 3.70 μg/kg (53.49% reduction), and ciprofloxacin (CIP) dropped from 643.14 ± 6.97 μg/kg to 205.46 ± 9.72 μg/kg, achieving a 68.05% reduction. Prolonged boiling, such as for 15 min, resulted in even greater decreases in antibiotic content. For instance, OTC in pork showed a reduction of 52.69%, with the concentration decreasing to 236.56 ± 7.96 μg/kg . Sulfonamides, including sulfadiazine (SDZ), sulfamethoxazole (SMX), sulfamonomethoxine (SMM), and sulfaquinoxaline (SQ), demonstrated gradual reductions in concentration with extended boiling times. For example, SDZ in poultry boiled at 100 °C for 3 min showed a 40.48% reduction, while a 12 min boiling time resulted in a 60.71% reduction .
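To make the percentage reductions quoted above easier to verify, the short sketch below recomputes them from the reported initial and post-treatment concentrations. It is a minimal illustration only: the helper name is ours, and the boiling values for ENO, OTC, and CIP are simply the figures cited in the text, not new data.

```python
# Minimal sketch: percent reduction of antibiotic residues after thermal processing.
# The values are the boiling data quoted above (µg/kg); the function name is illustrative.

def percent_reduction(initial_ug_per_kg: float, final_ug_per_kg: float) -> float:
    """Return the relative decrease in residue concentration, in percent."""
    return (initial_ug_per_kg - final_ug_per_kg) / initial_ug_per_kg * 100.0

boiling_poultry_100c_5min = {
    # antibiotic: (initial concentration, concentration after boiling), in µg/kg
    "enrofloxacin (ENO)": (746.34, 237.53),
    "oxytetracycline (OTC)": (824.16, 383.33),
    "ciprofloxacin (CIP)": (643.14, 205.46),
}

for antibiotic, (before, after) in boiling_poultry_100c_5min.items():
    print(f"{antibiotic}: {percent_reduction(before, after):.2f}% reduction")
    # Prints ≈ 68.17%, 53.49%, and 68.05%, matching the values cited in the text.
```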
Roasting was another processing method analyzed. Roasting poultry at 200 °C for 30 min reduced the ENO concentration from 746.34 ± 5.62 μg/kg to 233.23 ± 10.19 μg/kg, corresponding to a 68.75% reduction . Similarly, CIP levels dropped from 643.14 ± 6.97 μg/kg to 200.98 ± 10.02 μg/kg, also achieving a 68.75% reduction. However, roasting at lower temperatures (170 °C) for varying durations was less effective in reducing sulfonamide levels. For instance, roasting for 6 min reduced SQ by 21.66%, while roasting for 12 min achieved a 37.73% reduction. Microwave cooking showed high effectiveness, particularly at higher power levels and longer cooking times. Cooking poultry in a microwave at 900 W for 3 min reduced OTC levels from 824.16 ± 7.20 μg/kg to 227.67 ± 2.10 μg/kg, corresponding to a 72.38% reduction . CIP levels decreased by 55.16%, reaching 288.40 ± 3.23 μg/kg. Shorter microwave times and lower power settings (440 W for 45 s) were less effective but still resulted in notable reductions. For instance, tetracycline (TET) levels in poultry decreased by 59.89%, while in pork, the reduction reached 80.54% . The data suggest a clear correlation between the intensity of microwave processing and the effectiveness of antibiotic reduction. Grilling, despite utilizing high temperatures, was less effective than other methods. For poultry grilled at 8 kW for 2.5 min, ENO levels decreased by only 33.33%, while OTC levels dropped by just 16.67% . Reductions for CIP and doxycycline (DOX) were similarly modest, at approximately 16.66–16.67%. This suggests that the short duration of grilling, combined with high intensity, resulted in less degradation of antibiotic residues compared to longer and more evenly distributed heating processes. The analysis of the data indicates that the effectiveness of antibiotic reduction in meat depends on the processing method, the duration of the process, and the type of antibiotic. Boiling and microwave cooking were the most effective methods, with longer durations and higher intensities achieving reductions of over 70%. Roasting and grilling, despite employing high temperatures, were less effective, particularly for shorter durations. Additionally, studies reveal that while thermal processing reduces antibiotic residues, it may lead to the formation of degradation products with potential health implications. For example, Gratacós-Cubarsí et al. observed that tetracyclines in poultry and pork degrade under heat, forming anhydrotetracyclines, which retain some biological activity. Nguyen et al. highlighted the toxic potential of oxytetracycline degradation products in animal models, and Furusawa and Hanabusa found that cooking significantly reduces sulfonamide levels, though complete elimination remains challenging. These findings emphasize the dual role of food processing in reducing antibiotics and potentially generating bioactive or toxic degradation products, underlining the need for further research to optimize processing techniques and assess their implications for consumer safety. The presence of drug residues in meat might cause a serious problem in the production of fermented meat products since the components of industrial starter cultures for fermented meat products might be susceptible to antibiotic residues. In this case, a fermentation process might be disrupted or altered, which not only results in obtaining meat products with changed sensory properties but also poses a risk to public health. Previous studies by Darwish et al. and Moyane et al. 
showed that the altered fermentation process caused an outbreak of foodborne illness as pathogens present in the raw material persisted after poor fermentation. According to a study by Kjeldgaard et al. , it appears that even the permitted levels of antibiotics in meat can negatively affect the fermentation process. They showed that bacteria used as starter cultures are susceptible to antibiotic residues, even at levels close to those allowed by law, which can lead to the presence of pathogens in processed sausages. Their findings suggest that such residues could be the cause of disease outbreaks associated with the consumption of fermented meat products, providing an argument for reducing the use of antibiotics in animal husbandry . The studies presented here show that the choice of heat treatment method plays a key role in reducing antibiotic residues in meat products, which is directly relevant to food safety and public health. Antibiotic resistance among pathogenic bacteria increases morbidity and mortality and is therefore a challenge worldwide . Of particular concern is the emergence of multidrug resistance . The scale of antibiotic-resistant bacteria in the environment of animal farms observed worldwide today is a consequence of the widespread use of antibiotics at least a decade earlier . Very often, the same antibiotics that were used in agriculture and veterinary medicine were also used to treat humans. For therapeutic purposes, they should only be administered to animals with a confirmed infection . However, it is common practice in poultry, cattle, and pig farming to administer prophylactic doses of antibiotics to the whole herd, which are much higher than those used for therapeutic purposes . The Chief Veterinary Inspectorate has been monitoring the drug resistance of zoonotic bacteria in Poland since 2014, and the results show an increase in the drug resistance of microorganisms. Intensive agriculture emits high levels of pollutants into the environment, including the air, soil, surface water, and rainwater . The use of manure as a fertilizer carries the risk of environmental contamination by pathogens, antibiotics, and antibiotic-resistant pathogens . The main causes of antibiotic resistance include the overuse of antibiotics in agriculture, poor veterinary practices, and environmental pollution; its health, economic, and environmental implications underline the importance of regulation and of preventive measures such as biosecurity programs, vaccination, and One Health initiatives. 4.1. Regulations in Antibiotic Use The European Medicines Agency sets maximum residue limits and requires that food not contain harmful amounts of veterinary medicines. Illegal practices, such as off-label use of approved drugs, also contribute to the problem . The use of antibiotics in veterinary medicine has been uncontrolled, but legislation is now being introduced to regulate the practice . However, it is difficult to assess practice on animal farms in Poland due to inconsistencies between reports of antibiotic use and the surveillance system for these drugs . In Poland, one of the laws regulating medicinal products, including antibiotics, is the Act of 6 September 2001 on Pharmaceutical Law. It defines the use of medicinal products in humans and animals, establishes rules for the production and authorization of medicines, and regulates the conduct of clinical trials .
The Act of 11 March 2004 on the protection of animal health and the control of infectious animal diseases imposes an obligation on veterinarians to keep veterinary medical records of the treatment carried out. Regulation (EU) 2019/6 of the European Parliament and of the Council of 11 December 2018 on veterinary medicinal products , repealing Directive 2001/82/EC, defines the use of antimicrobials in the treatment of animal diseases. The provisions of this regulation entered into force on 28 January 2022. It introduces important requirements for medicinal products for use in animals, aiming to improve public health and animal health, and reduce antibiotic resistance. Most notably, it bans the prophylactic use of antimicrobials in healthy animals (except in exceptional cases), places restrictions on the use of antibiotics important for human treatment, and requires detailed monitoring and reporting of their sale and use. It sets stricter conditions for registration and introduces a single authorization system in the EU market to increase the quality, safety, and availability of medicinal products. Only veterinarians can prescribe medicines for animals, limiting independent use by pet owners. The regulation also promotes research into new, safe medicinal products and tightens import rules to ensure they comply with EU standards. All these regulations are part of the European Union’s strategy for health safety and the fight against antimicrobial resistance . However, none of the above legal requirements prohibit the therapeutic use of antimicrobial substances but only restrict their unjustified use . In the European Union, since January 2006, following Regulation No 1831/2003 of the European Parliament and of the Council of 22 August 2003 , the marketing and use of antibiotics as feed additives have been prohibited. In Poland, veterinarians providing veterinary services are responsible for keeping drug circulation records and veterinary documentation, including prescription medicinal products for use in both livestock and pets . Currently, the use of antibiotics for growth promotion in farm animals and poultry is banned throughout the EU. However, this ban has not significantly reduced the use of antimicrobials, and subtherapeutic use has been replaced by metaphylaxis and prophylaxis . 4.2. Implications of Antibiotic Resistance Antibiotic resistance leads to higher rates of morbidity and mortality, particularly because of infections with multidrug-resistant bacteria . These bacteria are more difficult to treat, resulting in longer hospital stays and an increased risk of complications and deaths . Antibiotic resistance in Poland leads to serious health risks. Another problem is global bacterial resistance, which can lead to ineffective standard antibiotic therapies and higher hospital admissions . The costs associated with antibiotic resistance are enormous, both for healthcare systems and the economy. Inappropriate use of antibiotics in Poland, especially in primary care, leads to high treatment costs for infections caused by resistant bacterial strains. Research shows that the overuse of antibiotics in regions with high levels of unemployment and intensive population mobility contributes to increased resistance and economic burden, including prolonged hospitalization and higher treatment expenditure . The costs associated with treating infections caused by resistant bacteria from food are significant . 
High levels of antibiotic resistance, especially in egg products, affect consumer health, leading to increased healthcare expenditure, including longer hospital stays and the cost of additional diagnostic tests and treatment . In an economic context, bacterial resistance in the agricultural sector in Poland also leads to losses in agricultural production, as animals infected with resistant bacteria require more complex treatment, which increases the cost of breeding . These costs also include losses associated with product recalls and the costs of monitoring and controlling infections in agricultural production . Combating antibiotic resistance in the food production sector is a complex process that requires cooperation at local and national levels. These costs also extend to the agricultural sector, where the use of antibiotics in animal husbandry leads to production losses due to increasing drug resistance in both humans and animals . Research indicates that vaccines can be an economically viable tool in the fight against antibiotic resistance, reducing the number of cases of resistant infections and reducing the overall need for antibiotics . Antibiotic resistance also has a significant impact on the environment. The use of antibiotics in agriculture and animal husbandry leads to contamination of soil and water, which promotes the spread of resistance genes in the environment . Excessive use of antibiotics in animal husbandry and poor waste management lead to antibiotics and resistant bacteria entering the environment, including soil and groundwater . Studies on isolated strains from food products indicate that resistant bacteria can infiltrate the ecosystem through agricultural and industrial waste, increasing the risk of resistance genes spreading in the environment . Antibiotic resistance in Poland, associated with isolated bacteria from food, is a serious health, economic, and environmental threat. Effective measures are needed to reduce the use of antibiotics in food production and to monitor the spread of resistance. 4.3. Strategies to Prevent Antibiotic Resistance There is a need to integrate water, sanitation, and hygiene (WaSH) programs with biosecurity in animal husbandry. This approach can reduce the transmission of antibiotic-resistant bacteria . Biosequestration and improved hygiene in animal husbandry can significantly reduce the risk of exposure to resistant bacteria, protecting both humans and animals . The One Health approach emphasizes the importance of the interdependence between human, animal, and environmental health . The implementation of integrated measures, such as reducing the overuse of antibiotics and improving sanitation and hygiene in animal husbandry, are key actions in the fight against antibiotic resistance . These programs should be combined with better monitoring and surveillance systems to effectively prevent the further spread of resistant bacteria . Intensive animal husbandry in Poland results in the emission of bioaerosols containing antibiotic-resistant bacteria. These bacteria can enter the environment, threatening the health of humans and animals in the vicinity of farms . Action is needed to reduce the spread of antibiotic-resistant bacteria on farms and in the animal food supply chain . In Poland, monitoring and surveillance of the spread of antibiotic-resistant bacteria in the agricultural environment is insufficient . 
Studies to date show the presence of antibiotic-resistant bacteria on farms in Poland, but data are limited to individual farms and a small number of samples . Larger surveys and more extensive monitoring programs are needed to better assess the scale of the problem .
Bacteria such as Campylobacter spp., Staphylococcus spp., Enterococcus spp., Listeria monocytogenes , and Enterobacterales (including Salmonella spp. and E. coli ) are found in the animal farm environment and are emitted into the air and surface water; they can cause infections in humans and are a source of antibiotic resistance genes . Many bacteria have evolved multiple mechanisms of antibiotic resistance, including the production of inactivating enzymes, blockade of target sites, alteration in cell membrane permeability, and active efflux of antibiotics from the cell . Bacteria may have resistance genes for many different drugs, as well as transport proteins that can actively pump drugs and substances out of the cell into the external environment . The occurrence and antimicrobial resistance of microorganisms isolated from meat and meat products in Poland are reviewed below. 5.1. Campylobacter spp. Campylobacter spp. is a major cause of foodborne illness in humans, which results from improper processing or consumption of undercooked poultry meat . For severe or chronic infections caused by Campylobacter spp., treatment with antibiotics (e.g., fluoroquinolones and macrolides) may be necessary, which is problematic because of the uncontrolled use of these drugs in clinical medicine and animal production . Campylobacter spp. is one of the main causes of foodborne gastroenteritis and is responsible for the zoonosis campylobacteriosis. Campylobacter , especially Campylobacter jejuni and to a lesser extent Campylobacter coli , is one of the leading causes of foodborne infections worldwide . The main source of infection is contaminated poultry meat , and high contamination poses a threat to public health. It is estimated that 50% to 80% of human campylobacteriosis cases are directly linked to poultry meat, particularly Campylobacter jejuni . In recent years, resistance of Campylobacter to antibiotics (especially quinolones and macrolides) has been increasing due to their widespread use in agriculture . Although campylobacteriosis usually resolves spontaneously, macrolides (erythromycin), fluoroquinolones, and tetracyclines are used in severe cases . Since chickens are the main reservoir of Campylobacter , antibiotic resistance in these bacteria isolated from poultry is of serious concern.
The use of antimicrobials in animal production, especially in veterinary medicine, may contribute to the buildup of resistance in human isolates, especially to quinolones . The aim of the study by Woźniak-Biel et al. was to identify Campylobacter strains isolated from turkeys and chickens using polymerase chain reaction (PCR) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) methods and to assess their antibiotic resistance. The results obtained from MALDI-TOF were consistent with those from multiplex PCR. There was 100% resistance to ciprofloxacin in strains from turkeys and chickens, and 58.1% and 78.6% resistance to tetracycline in these groups, respectively. No multidrug-resistant strains were detected, and all ciprofloxacin-resistant strains had a mutation in the gyrA gene at the Thr-86 position. The tetO gene was present in 71.0% of turkey strains and 100% of chicken strains, and it was also found in five turkey strains and three chicken strains that were sensitive to tetracycline. The results indicate a high prevalence of Campylobacter strains that are phenotypically and genetically resistant to fluoroquinolones and tetracycline. A study by Maćkiw et al. on the antibiotic resistance of C. jejuni and C. coli strains isolated from food in Poland showed that Campylobacter spp. is often isolated from poultry, which is the main source of human infections with these bacteria. High levels of resistance to fluoroquinolones, including ciprofloxacin, were found, which is in line with trends observed in other European countries. Resistance to tetracyclines was also common, which may be due to the widespread use of these antibiotics in animal husbandry. The tet (O) genes responsible for resistance to tetracyclines and gyrA associated with resistance to fluoroquinolones were identified. Some strains showed resistance to macrolides such as erythromycin, but this was less prevalent compared to fluoroquinolones and tetracyclines. It was also noted that multidrug resistance was relatively common. These results suggest the need to monitor Campylobacter spp. resistance in food to prevent the spread of resistant strains, which can threaten public health. A study by Wieczorek and Osek analyzing the antibiotic resistance of C. jejuni and C. coli strains from poultry carcass samples collected between 2009 and 2013 showed that 54.4% of samples were positive for Campylobacter . Resistance to ciprofloxacin was 81.6%, to tetracycline 56.1%, and only 2.4% of isolates were resistant to erythromycin. In contrast, resistance was higher among C. coli than C. jejuni , and an increase in resistance to ciprofloxacin and tetracycline was noted over the five-year study period. A later study by Wieczorek et al. on the prevalence and antibiotic resistance of Campylobacter strains isolated from chicken carcasses in Poland between 2014 and 2018 reported that 53.4% of samples (in total 2367 samples collected from slaughterhouses across the country) were positive for Campylobacter . Mainly, C. coli (31.2%) and C. jejuni (22.2%) were identified. The strains showed high resistance to ciprofloxacin (93.1%), nalidixic acid (92.3%), and tetracycline (70.9%). Only a small percentage of isolated strains were resistant to erythromycin (4.2%), with C. coli (6.4%) showing more resistance than C. jejuni (1.1%). Multidrug resistance was found in 25.1% of C. coli and 20.6% of C. jejuni strains. 
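The increase in ciprofloxacin resistance between the two Wieczorek surveys (81.6% in 2009-2013 versus 93.1% in 2014-2018) can be checked with a simple two-proportion test. The sketch below is illustrative only: the percentages follow the text, but the isolate denominators are assumed for the example, since they are not restated here.

```python
import math

def two_proportion_z_test(r1: int, n1: int, r2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two resistance proportions."""
    p1, p2 = r1 / n1, r2 / n2
    pooled = (r1 + r2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Percentages as in the text (81.6% vs 93.1% ciprofloxacin resistance);
# the isolate counts below are assumed purely for illustration.
z, p = two_proportion_z_test(r1=408, n1=500, r2=931, n2=1000)
print(f"z = {z:.1f}, two-sided p = {p:.1e}")
```

With these assumed sample sizes the difference lies far beyond chance, which is consistent with the conclusion that resistance increased over time; with much smaller surveys the same percentages might not reach statistical significance.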
The study showed an increase in the percentage of multidrug-resistant strains compared to earlier years, indicating the necessity of taking measures to control Campylobacter at the poultry slaughter stage and restricting the use of antibiotics in poultry production. Rożynek et al. analyzed in detail the emergence of macrolide-resistant Campylobacter strains in poultry meat in Poland and the resistance mechanisms responsible for the problem. Macrolides, such as erythromycin, are key antibiotics used to treat infections caused by these bacteria . The study found a significant number of strains resistant to macrolides, which poses an important therapeutic challenge. The mechanism of resistance to these antibiotics was mainly related to mutations in domain V of the 23S rRNA gene, which encodes the ribosomal subunit responsible for macrolide binding. These mutations, particularly at nucleotide positions 2074 and 2075, lead to a reduced ability of macrolides to inhibit bacterial protein synthesis . Also identified were erm (B) genes that encode methyltransferases, enzymes that modify ribosomes and cause macrolide resistance. In addition, other resistance mechanisms, such as the pumping of antibiotics out of bacterial cells by the efflux pump CmeABC, were also identified as an important factor in the development of resistance. The study also found a link between resistance and intensive antibiotic use in poultry farming, which promotes the selection of resistant strains. The authors emphasize the need to monitor antibiotic resistance and to introduce stricter regulations on the use of macrolides in animal food production to prevent the further spread of resistant strains of Campylobacter spp. Another source of Campylobacter is beef and pork. It was reported that the prevalence of Campylobacter spp. in retail beef products was about 10.0% , whereas its prevalence in beef and pork carcasses was 10.0% and 30.0%, respectively . Antibiotic profiling revealed that Campylobacter isolated from pork and cattle carcasses during the slaughter process in Poland most often showed resistance to quinolones (57.1%) and tetracycline (51.4%) . One strain of C. coli from a pork sample was resistant to three antibiotics simultaneously. This is worrisome given the public health concerns arising from the increasing antibiotic resistance of microorganisms to antimicrobials that are used as first-line drugs in the clinical treatment of campylobacteriosis . As reported by Wieczorek and Osek , 100% of Campylobacter strains isolated from pork and beef carcasses were sensitive to gentamicin and chloramphenicol. Significant differences were found between C. coli and C. jejuni , especially in resistance to streptomycin ( p < 0.001) and tetracycline ( p < 0.05). All C. jejuni isolates were sensitive to streptomycin, while 80.5% and 66.7% of C. coli strains from pigs and cattle, respectively, were resistant. C. coli also showed higher resistance to tetracycline, quinolones (nalidixic acid), and fluoroquinolones (ciprofloxacin). Four C. coli isolates from pig carcasses were resistant to erythromycin. Multidrug resistance was found in 61.4% of strains, with the highest levels of resistance to quinolones, fluoroquinolones, aminoglycosides, and tetracyclines, mainly in C. coli . Campylobacter spp. is also prevalent in geese and poses a potential risk for human campylobacteriosis through the consumption of goose meat. Campylobacter was found in 83.3% of goose cecum samples and 52.5% of neck skin samples from carcasses, with C. 
jejuni being the predominant species (87.7% of isolates) . The isolates exhibited high levels of antimicrobial resistance, particularly to quinolones (90.8%) and tetracycline (79.8%), while resistance to macrolides was rare (0.6%) . This aligns with findings from other studies showing high resistance of Campylobacter isolates to ciprofloxacin, tetracycline, and nalidixic acid in various bird species . Research on Campylobacter spp. in meat and meat products in Poland indicates the presence of this pathogen in beef, pork, and poultry, with poultry meat being the main source of human infections. Studies have shown significant levels of antibiotic resistance, especially to quinolones and tetracycline, posing a serious public health challenge. Macrolide resistance, although rarer, is also a problem, especially in C. coli . Campylobacter strains have also shown multidrug resistance, underscoring the need for the close monitoring of antibiotic resistance and limiting the use of antibiotics in animal production. The increase in the number of multi-resistant strains in recent years poses an epidemiological threat and calls for action to control Campylobacter at all stages of food production. 5.2. Staphylococcus spp. Antibiotic resistance in staphylococci isolated from meat and meat products has become an important public health problem worldwide . Both coagulase-positive staphylococci (CPS) and coagulase-negative staphylococci (CNS) have been found to carry antibiotic resistance genes, posing a potential threat to consumers . Studies have shown a high prevalence of antibiotic-resistant Staphylococcus species in a variety of meat products, including chicken, beef, and processed meat products . Interestingly, the distribution of antibiotic resistance varies by Staphylococcus species and meat type . The pathogenesis of CNS species depends on the factors required for their commensal lifestyle, and one such factor that increases the importance of these microorganisms in the pathology of mammals and birds is their resistance to numerous antimicrobial agents . Poultry has been identified as one of the most important carriers of foodborne pathogens and antimicrobial resistance genes . A detailed analysis of resistance genes in staphylococci associated with livestock revealed a wide variety of these genes. These mainly include genes known to be commonly present in staphylococci of human and animal origin, such as the beta-lactamase gene blaZ , the methicillin resistance gene mecA , the tetracycline resistance genes tet (K), tet (L), tet (M), and tet (O), the macrolide–lincosamide–streptogramin B (MLSB) resistance genes erm (A) and erm (B), the erythromycin-inducible resistance gene msr A/B, the aac (6′) Ie-aph (2″) Ia gene encoding an aminoglycoside-modifying enzyme, and the florfenicol/chloramphenicol resistance gene ( cfr ) . Methicillin resistance in Staphylococcus is now a global problem . In CNS, the mechanisms of resistance are similar to those observed in S. aureus . However, resistance mediated by the mecA gene in CNS is often expressed at lower levels compared to methicillin-resistant S. aureus (MRSA) . This lower expression can complicate its detection, highlighting the need for further studies to understand and address these diagnostic challenges . Pyzik et al. analyzed antibiotic resistance in coagulase-negative staphylococci isolated from poultry in Poland. CNS, despite being less pathogenic than coagulase-positive strains, is becoming a significant health threat due to increasing antibiotic resistance . 
The study detected numerous resistance genes, including the mecA gene, suggesting the presence of methicillin-resistant strains of coagulase-negative staphylococci (MR-CNS). Also identified were the ermA , ermB , and ermC genes, which confer resistance to macrolides, lincosamides, and streptogramins, limiting the effectiveness of these antibiotic groups in treating infections. The tetK and tetM genes, associated with resistance to tetracyclines, were also commonly present, indicating widespread CNS resistance to these frequently used antibiotics in animal treatment. In addition, the study revealed the presence of blaZ genes encoding beta-lactamases, which degrade beta-lactam antibiotics such as penicillins, further limiting the therapeutic options. Similarly, in a study by Chajęcka-Wierzchowska et al., the phenotypic and genotypic antimicrobial resistance profile of CNS from ready-to-eat cured meat was studied . Mainly, S. epidermidis and S. xylosus were identified. Phenotypic analysis showed that isolates exhibited resistance to cefoxitin (FOX), tigecycline (TGC), quinupristin–dalfopristin (QD), clindamycin (DA), tetracycline (TET), gentamicin (CN), rifampicin (RD), ciprofloxacin (CIP), trimethoprim (W), and trimethoprim–sulfamethoxazole (SXT), and that they carried the following genes encoding antibiotic resistance in their genome: mec(A) , tet(L) , tet(M) , and tet(K) . Notably, two strains of the S. xylosus species showed simultaneous resistance to antibiotics from nine different classes. This species is a component of the cultures used in the production of meat products, so it is also reasonable to control the strains used as starter and protective cultures, which have not been regulated for years and are not mandatorily tested for antimicrobial resistance (AMR) . A study by Krupa et al. analyzed the antibiotic resistance of S. aureus strains isolated from poultry meat in Poland . A significant percentage of these strains showed resistance to oxacillin, indicating the presence of methicillin-resistant Staphylococcus aureus (MRSA) in the tested poultry meat and posing a potential risk to consumers. MRSA strains are a serious public health risk due to limited treatment options for infections caused by them . The study observed genotypic diversity in these strains, suggesting multiple sources of infection and transmission between livestock and humans. Another study by Krupa et al. focused on the population structure and oxacillin resistance in S. aureus strains from pork in southwestern Poland. The study found the presence of antibiotic-resistant S. aureus strains, including methicillin-resistant S. aureus , which exhibit resistance to oxacillin. This resistance is associated with the presence of the mecA gene . Other resistance genes such as erm (encoding macrolide resistance) and tet (encoding tetracycline resistance) were also detected, indicating multidrug resistance in some strains. Phylogenetic analysis revealed a diversity of S. aureus clones. Podkowik et al. analyzed in detail the presence of antibiotic resistance genes in staphylococci isolated from ready-to-eat meat products such as sausages, hams, and pates. The study revealed the presence of numerous resistance genes, suggesting that these products may harbor pathogens resistant to antibiotic treatment. Particular attention was paid to the mecA gene. In addition, erm genes encoding resistance to macrolides, lincosamides, and streptogramins were detected, further complicating therapy, as these antibiotics are often used to treat staphylococcal infections. 
Genes of the tet family, which confer resistance to tetracyclines, a group of antibiotics widely used in veterinary medicine and agriculture, have also been identified, suggesting that the use of these drugs in animal husbandry may contribute to the spread of resistant strains in food . The presence of the blaZ gene, which encodes beta-lactamases, enzymes that degrade beta-lactam antibiotics (such as penicillins), indicates a wide range of resistance, further limiting treatment options for infections. The study underscores that the high prevalence of these genes in ready-to-eat products poses a real threat to public health, as consumption of contaminated foods can lead to infections that are difficult to treat. The presence of antibiotic-resistant staphylococci in meat and meat products is a growing food safety concern. The high prevalence of resistance genes and multidrug-resistant strains highlights the need for improved monitoring systems and stricter regulation of antibiotic use in animal husbandry. These findings underscore the necessity of ongoing surveillance of MRSA and other resistant bacteria in animal products to mitigate the risk of transmission to humans and prevent the spread of resistance in the food chain. Additionally, further research is required to better understand resistance mechanisms, develop effective strategies to control them, and address this complex public health issue in the context of food production and processing. 5.3. Enterococcus spp. Enterococci, which are part of the natural intestinal flora of mammals, birds, and humans, are often responsible for nosocomial infections such as urinary tract infections, endocarditis, and catheter- and wound-related infections . The most frequently isolated species are Enterococcus faecalis and Enterococcus faecium , whereas Enterococcus gallinarum and Enterococcus casseliflavus appear less frequently . In poultry, enterococci cause, among other conditions, endocarditis and arthritis . The use of antibiotics in human and veterinary medicine promotes the selection of resistant strains, which can transfer resistance genes between different bacteria, posing a risk to human health . In Europe, due to resistance to vancomycin and aminoglycosides, infections caused by enterococci are a serious clinical problem . An example is the use of avoparcin in animal feed, which contributed to the increase in vancomycin resistance before its use was banned in 1997 . Molecular mechanisms of resistance include genes such as vanA , vanB , tetM , or ermB , and biofilm-forming enterococci are particularly difficult to control . Biofilms, which are complex communities of microorganisms, protect bacteria from antibiotics and the immune system, making it difficult to treat infections such as wound or urinary tract infections . The ability to form a biofilm also increases contamination in the food industry and promotes gene transfer between bacteria . A study by Chajęcka-Wierzchowska et al. analyzed 390 samples of ready-to-eat meat products, of which Enterococcus strains were detected in 74.1%. A total of 302 strains were classified: E. faecalis (48.7%), E. faecium (39.7%), E. casseliflavus (4.3%), E. durans (3.0%), E. hirae (2.6%), and other Enterococcus spp. (1.7%). A high percentage of isolates showed resistance to streptomycin (45.0%), erythromycin (42.7%), fosfomycin (27.2%), rifampicin (19.2%), tetracycline (36.4%), and tigecycline (19.9%). The most frequently detected resistance gene was ant(6′)-Ia (79.6%). 
Other significant genes were aac(6′)-Ie-aph(2″)-Ia (18.5%), aph(3″)-IIIa (16.6%), and tetracycline resistance genes: tetM (43.7%), tetL (32.1%), and tetK (14.6%). The ermB and ermA genes were found in 33.8% and 18.9% of isolates, respectively, and almost half of the isolates contained the conjugative transposon Tn916/Tn1545. The study revealed that enterococci are widespread in ready-to-eat meat products. Many of the isolated strains show antibiotic resistance and carry resistance genes that pose a potential risk due to their ability to transmit resistance genes to bacteria present in the human body, which may interact with enterococci isolated from food products. Knowledge of antibiotic resistance in food strains outside the E. faecalis and E. faecium species is very limited . The experiments conducted in this study analyzed in detail the antibiotic resistance of strains of species such as E. casseliflavus , E. durans , E. hirae , and E. gallinarum . The results indicate that these species may also harbor resistance genes to several important classes of antibiotics. Ławniczek-Wałczyk et al. analyzed the prevalence of antibiotic-resistant Enterococcus sp. strains in meat and the production environment of meat plants in Poland. Different Enterococcus species were identified, including E. faecalis and E. faecium . These strains showed significant antibiotic resistance, especially to erythromycin, tetracycline, and vancomycin. Resistance to vancomycin is of particular concern because vancomycin is often the drug of last resort in the treatment of infections caused by multidrug-resistant bacteria. Resistance genes such as vanA , vanB (for vancomycin), and ermB (for erythromycin) are commonly present in strains from both environmental and meat samples. A study by Stępień-Pyśniak et al. examined the prevalence and antibiotic resistance patterns of Enterococcus strains isolated from poultry. It focused on E. faecalis and E. faecium , which are common in poultry and known for their antibiotic resistance. The results showed that a significant proportion of isolates exhibited multidrug resistance, particularly to antibiotics frequently used in both veterinary and human medicine. High resistance rates were observed for antibiotics such as erythromycin, tetracycline, and vancomycin, with some strains showing resistance to multiple classes of antibiotics. Woźniak-Biel et al. analyzed the antibiotic resistance of Enterococcus strains isolated from turkeys. In the study, 51 strains from turkeys showed high resistance to tetracycline (94.1%) and erythromycin (76.5%). About 43.1% of the strains were multi-resistant, and 15.7% showed vancomycin resistance, associated with the presence of the vanA gene. A macrolide resistance gene ( ermB ) was also detected in 68.6% of the strains. All isolates showed the ability to form biofilms, which may contribute to their greater resistance and difficulty in treatment. The studies presented the widespread occurrence of antibiotic-resistant Enterococcus strains in meat and meat products, particularly in ready-to-eat foods and poultry. Multiple studies consistently show that E. faecalis and E. faecium are the most frequently isolated species, with significant resistance to antibiotics such as tetracycline, erythromycin, and vancomycin. The research points to the frequent presence of antibiotic-resistant genes like vanA , ermB , tetM , and ermA . 
In addition to their high resistance levels, these strains often exhibit the ability to form biofilms, further complicating their treatment and increasing the risk of gene transfer between bacteria. Studies conducted in Poland have revealed that both environmental and meat production facilities are affected by the presence of antibiotic-resistant enterococci, particularly those resistant to clinically important antibiotics like vancomycin, which is often a last-resort treatment. This resistance poses a significant threat to public health by facilitating the transmission of resistant strains through the food chain, from animals to humans. 5.4. Listeria monocytogenes L. monocytogenes , a foodborne pathogen that causes listeriosis zoonosis, is increasingly being detected in meat and meat products, raising concerns about food safety and public health. Studies have shown different rates of L. monocytogenes in different meats, with chicken, pork, and ready-to-eat meat products being common sources of contamination . The emergence of antibiotic-resistant strains of L. monocytogenes in these foods poses a serious threat to human health, as it could compromise the effectiveness of antibiotic therapy for listeriosis . Interestingly, the prevalence and patterns of antibiotic resistance in L. monocytogenes isolates from meat and meat products vary across studies and geographic locations . While some studies indicate a relatively low prevalence of antibiotic resistance in L. monocytogenes , others report a high prevalence of resistant and multidrug-resistant strains . This discrepancy underscores the need for ongoing monitoring and surveillance of antibiotic resistance in L. monocytogenes across regions and food sources. Kurpas et al. described a detailed genomic analysis of L. monocytogenes strains isolated from ready-to-eat meats and surfaces in meat processing plants in Poland. The study identified a variety of L. monocytogenes strains that possessed genes encoding resistance to antibiotics from several classes . The fosB gene, responsible for resistance to fosfomycin, was detected in several strains. Genes for tetracycline resistance, such as tetM , have also been identified. L. monocytogenes strains also showed resistance to macrolides due to the presence of the ermB gene. Macrolides, such as erythromycin, are often used to treat respiratory and other bacterial infections, and resistance is a major challenge . The study also identified multidrug-resistant strains that simultaneously possessed genes encoding resistance to antibiotics from different classes, including aminoglycosides (e.g., aacA gene), β-lactams (e.g., blaZ gene), and sulfonamides (e.g., sul1 gene). These strains have been isolated both from ready-to-eat meat products and from surfaces in processing environments, suggesting that meat processing plants may be a reservoir of antibiotic-resistant strains . The detection of multi-resistant strains in processing environments indicates the possibility of long-term contamination at these sites and the risk of transmission of these strains into meat products . Antibiotic-resistant strains, which can cause severe infections in humans, especially in immunocompromised individuals, pose a serious epidemiological threat . Similar results were reported by Maćkiw et al. , who investigated the occurrence and characterization of L. monocytogenes in ready-to-eat meat products in Poland. The study revealed the presence of this pathogen in several food samples. L. 
monocytogenes strains were tested for resistance to various antibiotics, and the results showed significant resistance to several key antibiotics. Of most concern was resistance to erythromycin and tetracycline, which are frequently used to treat listeriosis infections. Kawacka et al. present a detailed study on the resistance of L. monocytogenes strains isolated from meat products and meat processing environments in Poland. The results showed that most of the analyzed isolates were antibiotic-susceptible to the most-used antibiotics, such as penicillins, macrolides, and tetracyclines, suggesting that current therapies are effective in treating infections associated with food of animal origin . Particular attention was paid to fluoroquinolones, particularly ciprofloxacin, where rare cases of reduced susceptibility were identified, which is worrisome given that fluoroquinolones are key antibiotics in the treatment of many bacterial infections . In contrast, in the study by Skowron et al. assessing the prevalence and antibiotic resistance of L. monocytogenes strains isolated from meat, researchers analyzed samples from pork, beef, and poultry over three years. They found that 2.1% of the collected meat samples were contaminated with L. monocytogenes , with poultry showing the highest contamination levels. The antibiotic resistance of these strains was concerning, as 6.7% were resistant to all five tested antibiotics. Specifically, the highest resistance rates were observed against cotrimoxazole (45.8%), meropenem (43.3%), erythromycin (40.0%), penicillin (25.8%), and ampicillin (17.5%). Only 32.5% of the strains were sensitive to all antibiotics tested. The occurrence of L. monocytogenes in meat and meat products raises serious food safety and public health concerns, especially due to the emergence of antibiotic-resistant strains. The diversity of prevalence rates and resistance patterns depending on the region and type of product indicates the need for continuous monitoring. Studies in Poland have identified resistance genes to multiple classes of antibiotics, raising concerns about the long-term contamination of meat processing environments and the risk of resistant strains contaminating finished products. Multidrug-resistant strains can significantly hinder the treatment of listeriosis infections, which requires strengthening food safety regulations and further research into resistance mechanisms. Furthermore, the findings emphasize the importance of microbiological monitoring and control in meat processing plants to prevent the spread of resistant L. monocytogenes . Regular research into antibiotic resistance among food-related pathogens is crucial, alongside the implementation of appropriate control procedures in food production. Ultimately, further research into resistance mechanisms and their implications is needed to better protect public health. 5.5. Enterobacterales The annual report on trends and sources of zoonoses published in December 2021 by the European Food Safety Authority (EFSA) and the European Center for Disease Prevention and Control (ECDC) shows that nearly one in four foodborne outbreaks in the European Union (EU) in 2020 were caused by Salmonella spp., making this bacterium the most reported causative agent of foodborne outbreaks (694 foodborne outbreaks in 2020) . Salmonella spp. infections in humans are usually caused by the consumption of food of animal origin, mainly eggs, poultry, or pork . An analysis by Gutema et al. 
shows that beef and veal can also be a source of Salmonella spp. infection because these animals are potential asymptomatic carriers. Multidrug-resistant Salmonella poses a serious threat to public health following foodborne infection . Today, such multidrug-resistant strains are increasingly being isolated from beef, pork , and poultry . According to Commission Implementing Decision 2013/652/EU on the monitoring of antimicrobial resistance, surveillance of resistance in Salmonella isolated from food and food-producing animals should target broilers, fattening pigs, calves under one year old, and their meat . A study by Szewczyk et al. on the antibiotic resistance of Enterobacterales strains isolated from food showed that most strains (28.0–65.1%) were resistant to a single antibiotic, but 15 strains (34.9%) were resistant to two or more antibiotics. Particularly prominent among them were strains of Escherichia coli and Proteus mirabilis , which were resistant to multiple antibiotics, including beta-lactams (piperacillin, cefuroxime, and cefotaxime), fluoroquinolones, and carbapenems. All isolates were sensitive to gentamicin, and none showed ESBL-type resistance. Strains resistant to high concentrations of antibiotics (256 μg/mL) included Salmonella spp., Hafnia alvei , P. mirabilis , and E. coli . Beta-lactam-resistant Klebsiella strains, resistant to piperacillin and cefuroxime (including K. ozaenae and K. rhinoscleromatis ), suggested the ability to produce beta-lactamase enzymes (AmpC and CTX-M), which allows resistance to be transferred between species. Zarzecka et al. examined in detail the incidence of antibiotic resistance in Enterobacterales strains isolated from raw meat and ready-to-eat meat products. The highest number of isolated strains was identified as E. cloacae (42.4%), followed by E. coli (9.8%), P. mirabilis , S. enterica , P. penneri , and C. freundii (7.6% each), and C. braakii (6.6%), K. pneumoniae , and K. oxytoca (5.4% each). More than half of the isolated strains (52.2%) showed resistance to at least one antibiotic, with the highest number of resistant strains found against amoxicillin with clavulanic acid (28.3%) and ampicillin (19.5%). The ESBL phenotype was found in 26 strains, while the AmpC phenotype was found in 32 strains. The bla CTX-M gene was present in 53.8% of the ESBL-positive strains, and the CIT family gene was present in 43.8% of the AmpC-positive strains . Raw meat has been identified as a key source of resistant strains, posing a significant public health risk, especially in the context of ready-to-eat products, which can be exposed to improper processing, lack of proper sanitary–epidemiological control, and improper storage . Both phenotypic analyses, such as antibiotic susceptibility tests, and genotypic analyses were used in the study, which made it possible to accurately determine the resistance profiles of the tested strains. Mąka et al. analyzed the antibiotic resistance profiles of Salmonella strains isolated from retail meat products in Poland between 2008 and 2012. The results of the study showed that more than 90.0% of the strains exhibited resistance to at least one antibiotic, indicating a high level of resistance in the bacterial population. The highest resistance was found against tetracycline, streptomycin, and sulfonamides, reflecting the widespread use of these antibiotics in animal husbandry. Strains of S. 
typhimurium were more resistant than other serotypes, with about 20.0% of them showing resistance to five or more classes of antibiotics, classifying them as multi-resistant. Resistance to fluoroquinolones, which are often used to treat Salmonella spp. infections in humans, was also found. In a study by Pławińska-Czernak et al. , researchers analyzed the occurrence of multidrug resistance in Salmonella strains isolated from raw meat products such as poultry, beef, and pork. The study showed that 64.3% of the isolates were resistant to at least three classes of antibiotics, with the highest resistance reported against tetracyclines (56.5%), aminoglycosides (47.8%), beta-lactams (34.8%), and quinolones (30.4%). A key aspect of the study was the identification of genes encoding resistance, including the tetA , blaTEM , aadA , and qnrS genes, which were responsible for resistance to tetracyclines, beta-lactams, aminoglycosides, and quinolones, respectively. The presence of these genes indicates the wide dissemination of genetic resistance among food-related pathogens, which poses a serious threat to public health. Sarowska et al. examined the antibiotic resistance and pathogenicity of E. coli strains from poultry farms, retail meat, and human urinary tract infections. The strains showed significant resistance to a variety of antibiotic classes, including β-lactams, tetracyclines, aminoglycosides, fluoroquinolones, and sulfonamides, indicating the widespread selection pressure exerted by antibiotic use in poultry farming. E. coli strains from meat and poultry farms showed some commonalities with isolates causing human infections, suggesting the possibility that potentially pathogenic strains could be transmitted through the food chain. In the presented studies, the researchers highlight the urgent need for continuous monitoring of antibiotic resistance in animal products, along with the implementation of stricter sanitary standards in the food industry. The researchers emphasize the importance of educating producers and consumers about the risks of antibiotic resistance to minimize the risk of foodborne infections. Considering the changing resistance profiles, the researchers recommend regular monitoring and restriction of antibiotic use in agriculture, supported by stricter regulations to prevent the spread of resistant strains, especially Salmonella . Multidrug-resistant strains of Salmonella , which are increasingly resistant to tetracyclines, aminoglycosides, and beta-lactams, pose a serious threat to public health. Similarly, high levels of antibiotic resistance have been observed in Enterobacterales strains, including E. coli , isolated from raw meat and animal products. Particular attention was paid to ESBL- and AmpC-producing strains, highlighting the importance of reducing antibiotic use in animal husbandry and strengthening sanitary controls in meat processing. These studies also highlight the importance of monitoring food safety and zoonotic infection risks to reduce the spread of multidrug-resistant pathogens via food. 
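Several of the studies summarized in this section classify an isolate as multidrug-resistant (MDR) when it is resistant to antibiotics from at least three antimicrobial classes. The short Python sketch below illustrates that bookkeeping step; the antibiotic-to-class map and the example profile are simplified, hypothetical inputs rather than data from any cited survey.

```python
# Map each tested antibiotic to its antimicrobial class (simplified, illustrative).
ANTIBIOTIC_CLASS = {
    "ampicillin": "beta-lactams",
    "cefotaxime": "beta-lactams",
    "ciprofloxacin": "fluoroquinolones",
    "gentamicin": "aminoglycosides",
    "tetracycline": "tetracyclines",
    "erythromycin": "macrolides",
    "trimethoprim-sulfamethoxazole": "folate pathway inhibitors",
}

def is_multidrug_resistant(resistant_to: set[str], threshold: int = 3) -> bool:
    """An isolate is MDR when it is resistant to at least `threshold` distinct classes."""
    classes = {ANTIBIOTIC_CLASS[a] for a in resistant_to if a in ANTIBIOTIC_CLASS}
    return len(classes) >= threshold

# Hypothetical susceptibility profile of a single isolate.
profile = {"ampicillin", "cefotaxime", "tetracycline", "ciprofloxacin"}
print(is_multidrug_resistant(profile))  # True: beta-lactams + tetracyclines + fluoroquinolones
```

Counting distinct classes rather than individual drugs prevents, for example, resistance to two beta-lactams from being over-counted as multidrug resistance.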
is one of the main causes of foodborne gastroenteritis responsible for zoonosis—campylobacteriosis. Campylobacter , especially Campylobacter jejuni and to a lesser extent Campylobacter coli , is one of the leading causes of foodborne infections worldwide . The main source of infection is contaminated poultry meat , and high contamination poses a threat to public health. It is estimated that 50% to 80% of human campylobacteriosis cases are directly linked to poultry meat, particularly Campylobacter jejuni . In recent years, Campylobacter has been increasing in resistance to antibiotics (especially quinolones and macrolides) due to their widespread use in agriculture . Although campylobacteriosis usually resolves spontaneously, macrolides (erythromycin), fluoroquinolones, and tetracyclines are used in severe cases . Since chickens are the main reservoir of Campylobacter , antibiotic resistance in these bacteria isolated from poultry is of serious concern. The use of antimicrobials in animal production, especially in veterinary medicine, may contribute to the buildup of resistance in human isolates, especially to quinolones . The aim of the study by Woźniak-Biel et al. was to identify Campylobacter strains, isolated from turkeys and chickens, using polymerase chain reaction (PCR) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) methods, and assess their antibiotic resistance. The results obtained from MALDI-TOF were consistent with those from multiplex PCR. There was 100% resistance to ciprofloxacin in strains from turkeys and chickens, and 58.1% and 78.6% resistance to tetracycline in these groups, respectively. No multidrug-resistant strains were detected, and all ciprofloxacin-resistant strains had a mutation in the gyrA gene at the Thr-86 position. The presence of the tetO gene was present in 71.0% of turkey strains and 100% of chickens, and this gene was also found in five turkey strains and three chickens that were sensitive to tetracycline. The results indicate a high prevalence of Campylobacter strains that are phenotypically and genetically resistant to fluoroquinolones and tetracycline. A study by Maćkiw et al. on the antibiotic resistance of C. jejuni and C. coli strains isolated from food in Poland showed that Campylobacter spp. is often isolated from poultry, which is the main source of human infections with these bacteria. High levels of resistance to fluoroquinolones, including ciprofloxacin, were found, which is in line with trends observed in other European countries. Resistance to tetracyclines was also common, which may be due to the widespread use of these antibiotics in animal husbandry. The tet (O) genes responsible for resistance to tetracyclines and gyrA associated with resistance to fluoroquinolones were identified. Some strains showed resistance to macrolides such as erythromycin, but this was less prevalent compared to fluoroquinolones and tetracyclines. It was also noted that multidrug resistance was relatively common. These results suggest the need to monitor Campylobacter sp. resistance in food to prevent the spread of resistant strains, which can threaten public health. A study by Wieczorek and Osek analyzing the antibiotic resistance of C. jejuni and C. coli strains of poultry carcass samples collected between 2009 and 2013 showed that 54.4% of samples were positive for Campylobacter . Resistance to ciprofloxacin was 81.6%, to tetracycline 56.1%, and only 2.4% of isolates were resistant to erythromycin. 
In contrast, resistance was higher among C. coli than C. jejuni , and an increase in resistance to ciprofloxacin and tetracycline was noted over the five-year study period. A later study by Wieczorek et al. on the prevalence and antibiotic resistance of Campylobacter strains isolated from chicken carcasses in Poland between 2014 and 2018 reported that 53.4% of samples (in total 2367 samples collected from slaughterhouses across the country) were positive for Campylobacter . Mainly, C. coli (31.2%) and C. jejuni (22.2%) were identified. The strains showed high resistance to ciprofloxacin (93.1%), nalidixic acid (92.3%), and tetracycline (70.9%). Only a small percentage of isolated strains were resistant to erythromycin (4.2%), with C. coli (6.4%) showing more resistance than C. jejuni (1.1%). Multidrug resistance was found in 25.1% of C. coli and 20.6% of C. jejuni strains. The study showed an increase in the percentage of multidrug-resistant strains compared to earlier years, indicating the necessity of taking measures to control Campylobacter at the poultry slaughter stage and restricting the use of antibiotics in poultry production. Rożynek et al. analyzed in detail the emergence of macrolide-resistant Campylobacter strains in poultry meat in Poland and the resistance mechanisms responsible for the problem. Macrolides, such as erythromycin, are key antibiotics used to treat infections caused by these bacteria . The study found a significant number of strains resistant to macrolides, which poses an important therapeutic challenge. The mechanism of resistance to these antibiotics was mainly related to mutations in domain V of the 23S rRNA gene, which encodes the ribosomal subunit responsible for macrolide binding. These mutations, particularly at nucleotide positions 2074 and 2075, lead to a reduced ability of macrolides to inhibit bacterial protein synthesis . Also identified were erm (B) genes that encode methyltransferases, enzymes that modify ribosomes and cause macrolide resistance. In addition, other resistance mechanisms, such as the pumping of antibiotics out of bacterial cells by the efflux pump CmeABC, were also identified as an important factor in the development of resistance. The study also found a link between resistance and intensive antibiotic use in poultry farming, which promotes the selection of resistant strains. The authors emphasize the need to monitor antibiotic resistance and to introduce stricter regulations on the use of macrolides in animal food production to prevent the further spread of resistant strains of Campylobacter spp. Another source of Campylobacter is beef and pork. It was reported that the prevalence of Campylobacter spp. in retail beef products was about 10.0% , whereas its prevalence in beef and pork carcasses was 10.0% and 30.0%, respectively . Antibiotic profiling revealed that Campylobacter isolated from pork and cattle carcasses during the slaughter process in Poland most often showed resistance to quinolones (57.1%) and tetracycline (51.4%) . One strain of C. coli from a pork sample was resistant to three antibiotics simultaneously. This is worrisome given the public health concerns arising from the increasing antibiotic resistance of microorganisms to antimicrobials that are used as first-line drugs in the clinical treatment of campylobacteriosis . As reported by Wieczorek and Osek , 100% of Campylobacter strains isolated from pork and beef carcasses were sensitive to gentamicin and chloramphenicol. 
Significant differences were found between C. coli and C. jejuni , especially in resistance to streptomycin ( p < 0.001) and tetracycline ( p < 0.05). All C. jejuni isolates were sensitive to streptomycin, while 80.5% and 66.7% of C. coli strains from pigs and cattle, respectively, were resistant. C. coli also showed higher resistance to tetracycline, quinolones (nalidixic acid), and fluoroquinolones (ciprofloxacin). Four C. coli isolates from pig carcasses were resistant to erythromycin. Multidrug resistance was found in 61.4% of strains, with the highest levels of resistance to quinolones, fluoroquinolones, aminoglycosides, and tetracyclines, mainly in C. coli . Campylobacter spp. is also prevalent in geese and poses a potential risk for human campylobacteriosis through the consumption of goose meat. Campylobacter was found in 83.3% of goose cecum samples and 52.5% of neck skin samples from carcasses, with C. jejuni being the predominant species (87.7% of isolates) . The isolates exhibited high levels of antimicrobial resistance, particularly to quinolones (90.8%) and tetracycline (79.8%), while resistance to macrolides was rare (0.6%) . This aligns with findings from other studies showing high resistance of Campylobacter isolates to ciprofloxacin, tetracycline, and nalidixic acid in various bird species . Campylobacter spp. in meat and meat products in Poland indicates the presence of this pathogen in both beef, pork, and poultry, with poultry meat being the main source of human infections. Studies have shown significant levels of antibiotic resistance, especially to quinolones and tetracycline, posing a serious public health challenge. Macrolide resistance, although rarer, is also a problem, especially in C. coli . Campylobacter strains, which have also shown multidrug resistance, underscoring the need for the close monitoring of antibiotic resistance and limiting the use of antibiotics in animal production. The increase in the number of multi-resistant strains in recent years poses an epidemiological threat and calls for action to control Campylobacter at all stages of food production. Antibiotic resistance in staphylococci isolated from meat and meat products has become an important public health problem worldwide . Both coagulase-positive staphylococci (CPS) and coagulase-negative staphylococci (CNS) have been found to carry antibiotic-resistant genes, posing a potential threat to consumers . Studies have shown a high prevalence of antibiotic-resistant Staphylococcus species in a variety of meat products, including chicken, beef, and processed meat products . Interestingly, the distribution of antibiotic resistance varies by Staphylococcus species and meat type . The pathogenesis of CNS species depends on the factors required for their commensal lifestyle, and one such factor that increases the importance of these microorganisms in the pathology of mammals and birds is their resistance to numerous antimicrobial agents . Poultry has been identified as one of the most important carriers of foodborne pathogens and antimicrobial resistance genes . A detailed analysis of resistance genes in staphylococci associated with livestock revealed a wide variety of these genes. 
These mainly include genes known to be commonly present in staphylococci of human and animal origin, such as the beta-lactamase gene blaZ , the methicillin resistance gene mecA , the tetracycline resistance genes tet (K), tet (L), tet (M), and tet (O), macrolysine–lincosamide–estreptogramin B (MLSB) resistance genes erm (A) and erm (B), erythromycin-inducible resistance gene msr A/B, aac (6′) Ie-aph (2″) Ia gene of aminoglycoside-modifying enzymes, and florfenicol/chloramphenicol resistance gene ( cfr ) . Methicillin resistance in Staphylococcus is now a global problem . In CNS, the mechanisms of resistance are like those observed in S. aureus . However, resistance mediated by the mecA gene in CNS is often expressed at lower levels compared to methicillin-resistant S. aureus (MRSA) . This lower expression can complicate its detection, highlighting the need for further studies to understand and address these diagnostic challenges . Pyzik et al. analyzed antibiotic resistance in coagulase-negative staphylococci isolated from poultry in Poland. CNS, despite being less pathogenic than coagulase-positive strains, is becoming a significant health threat due to increasing antibiotic resistance . The study detected numerous resistance genes, including the mecA gene, suggesting the presence of methicillin-resistant strains of coagulase-negative staphylococci (MR-CNS). Also identified were the ermA , ermB , and ermC genes, which confer resistance to macrolides, lincosamides, and streptogramins, limiting the effectiveness of these antibiotic groups in treating infections. The tetK and tetM genes, associated with resistance to tetracyclines, were also commonly present, indicating widespread CNS resistance to these frequently used antibiotics in animal treatment. In addition, the study revealed the presence of blaZ genes encoding beta-lactamases, which leads to the degradation of beta-lactam antibiotics such as penicillins, further limiting the therapeutic options. Also, in a study by Chajęcka-Wierzchowska et al., the pheno- and geno-typical antimicrobial resistance profile of CNS from ready-to-eat cured meat was studied . Mainly, S. epidermidis and S. xylosus were identified. Phenotypic analysis showed that isolates exhibited resistance to FOX, TGC, QD, DA, TET, CN, RD, CIP, W, and SXT, containing the following genes encoding antibiotic resistance in their genome: mec(A) , tet(L) , tet(M) , and tet(K) . Notably, two strains of the S. xylosus species showed simultaneous antibiotic resistance from nine different classes. This species is a component of the cultures used in the production of meat products, so it also becomes reasonable to control the strains used as starter and protective cultures, which have not been regulated for years and are not mandatorily tested for AMR . A study by Krupa et al. analyzed the antibiotic resistance of S. aureus strains isolated from poultry meat in Poland . The study found that a significant percentage of these strains showed resistance to oxacillin, indicating the presence of methicillin-resistant strains of Staphylococcus aureus (MRSA). The poultry meat tested in the study also contained MRSA strains, posing a potential risk to consumers. MRSA strains are a serious public health risk due to limited treatment options for infections caused by them . The study observed genotypic diversity in these strains, suggesting multiple sources of infection and transmission between livestock and humans. Another study by Krupa et al. 
focused on the population structure and oxacillin resistance in S. aureus strains from pork in southwestern Poland. The study found the presence of antibiotic-resistant S. aureus strains, including methicillin-resistant S. aureus , which exhibit resistance to oxacillin. This resistance is associated with the presence of the mecA gene . Other resistance genes such as erm (encoding macrolide resistance) and tet (encoding tetracycline resistance) were also detected, indicating multidrug resistance in some strains. Phylogenetic analysis revealed a diversity of S. aureus clones. Podkowik et al. analyzed in detail the presence of antibiotic-resistant genes in staphylococci isolated from ready-to-eat meat products such as sausages, hams, and pates. The study revealed the presence of numerous resistance genes, suggesting that these products may harbor pathogens resistant to antibiotic treatment. Particular attention was paid to the mecA gene. In addition, erm genes encoding resistance to macrolides, lincosamides, and streptogramins were detected, further complicating therapy, as these antibiotics are often used to treat staphylococcal infections. Tet genes have also been identified that cause resistance to tetracyclines, a group of antibiotics widely used in veterinary medicine and agriculture, suggesting that the use of these drugs in animal husbandry may contribute to the spread of resistant strains in food . The presence of the blaZ gene, which encodes beta-lactamases, enzymes that degrade beta-lactam antibiotics (such as penicillins), indicates a wide range of resistance, further limiting treatment options for infections. The study underscores that the high prevalence of these genes in ready-to-eat products poses a real threat to public health, as consumption of contaminated foods can lead to infections that are difficult to treat. The presence of antibiotic-resistant staphylococci in meat and meat products is a growing food safety concern. The high prevalence of resistance genes and multidrug-resistant strains highlights the need for improved monitoring systems and stricter regulation of antibiotic use in animal husbandry. These findings highlight the necessity of ongoing surveillance of MRSA and other resistant bacteria in animal products to mitigate the risk of transmission to humans and prevent the spread of resistance in the food chain. Additionally, further research is required to better understand resistance mechanisms, develop effective strategies to control them, and address this complex public health issue in the context of food production and processing. Enterococci, which are the natural intestinal flora of mammals, birds, and humans, are often responsible for nosocomial infections such as urinary tract infections, endocarditis, and catheter- and wound-related infections . The most frequently isolated species are Enterococcus faecalis and Enterococcus faecium , whereas Enterococcus gallinarum and Enterococcus casseliflavus appear less frequently . In poultry, enterococci cause, among others, endocarditis and arthritis . The use of antibiotics in human and veterinary medicine promotes the selection of resistant strains, which can transfer resistance genes between different bacteria, posing a risk to human health . In Europe, due to resistance to vancomycin and aminoglycosides, infections caused by enterococci are a serious clinical problem . 
An example is the use of avoparcin in animal feed, which contributed to the increase in vancomycin resistance before its use was banned in 1997 . Molecular mechanisms of resistance include genes such as vanA , vanB , tetM , or ermB , and biofilm-forming enterococci are particularly difficult to control . Biofilms, which are complex communities of microorganisms, protect bacteria from antibiotics and the immune system, making it difficult to treat infections such as wounds or urinary tract infections . The ability to form a biofilm also increases contamination in the food industry and promotes gene transfer between bacteria . A study by Chajęcka-Wierzechowska et al. analyzed 390 samples of ready-to-eat meat products, of which Enterococcus strains were detected in 74.1%. A total of 302 strains were classified: E. faecalis (48.7%), E. faecium (39.7%), E. casseliflavus (4.3%), E. durans (3.0%), E. hirae (2.6%), and another Enterococcus spp. (1.7%). A high percentage of isolates showed resistance to streptomycin (45.0%), erythromycin (42.7%), fosfomycin (27.2%), rifampicin (19.2%), tetracycline (36.4%), and tigecycline (19.9%). The most frequently detected resistance gene was ant(6′)-Ia (79.6%). Other significant genes were aac(6′)-Ie-aph(2″)-Ia (18.5%), aph(3″)-IIIa (16.6%), and tetracycline resistance genes: tetM (43.7%), tetL (32.1%), and tetK (14.6%). The ermB and ermA genes were found in 33.8% and 18.9% of isolates, respectively, and almost half of the isolates contained the conjugative transposon Tn916/Tn1545. The study revealed that enterococci are widespread in ready-to-eat meat products. Many of the isolated strains show antibiotic resistance and carry resistance genes that pose a potential risk due to their ability to transmit resistance genes to bacteria present in the human body, which may interact with enterococci isolated from food products. Knowledge of antibiotic resistance in food strains outside the E. faecalis and E. faecium species is very limited . The experiments conducted in this study analyzed in detail the antibiotic resistance of strains of species such as E. casseliflavus , E. durans , E. hirae , and E. gallinarum . The results indicate that these species may also harbor resistance genes to several important classes of antibiotics. Ławniczek-Wałczyk et al. analyzed the prevalence of antibiotic-resistant Enterococcus sp. strains in meat and the production environment of meat plants in Poland. Different Enterococcus species were identified, including E. faecalis and E. faecium . These strains showed significant antibiotic resistance, especially to erythromycin, tetracycline, and vancomycin. Resistance to vancomycin is of particular concern because vancomycin is often the drug of last resort in the treatment of infections caused by multidrug-resistant bacteria. Resistance genes such as vanA , vanB (for vancomycin), and ermB (for erythromycin) are commonly present in strains from both environmental and meat samples. A study by Stępień-Pyśniak et al. examined the prevalence and antibiotic resistance patterns of Enterococcus strains isolated from poultry. It focused on E. faecalis and E. faecium , which are common in poultry and known for their antibiotic resistance. The results showed that a significant proportion of isolates exhibited multidrug resistance, particularly to antibiotics frequently used in both veterinary and human medicine. 
High resistance rates were observed for antibiotics such as erythromycin, tetracycline, and vancomycin, with some strains showing resistance to multiple classes of antibiotics. Woźniak-Biel et al. analyzed the antibiotic resistance of Enterococcus strains isolated from turkeys. In the study, 51 strains from turkeys showed high resistance to tetracycline (94.1%) and erythromycin (76.5%). About 43.1% of the strains were multi-resistant, and 15.7% showed vancomycin resistance, associated with the presence of the vanA gene. A macrolide resistance gene ( ermB ) was also detected in 68.6% of the strains. All isolates showed the ability to form biofilms, which may contribute to their greater resistance and difficulty in treatment. The studies presented the widespread occurrence of antibiotic-resistant Enterococcus strains in meat and meat products, particularly in ready-to-eat foods and poultry. Multiple studies consistently show that E. faecalis and E. faecium are the most frequently isolated species, with significant resistance to antibiotics such as tetracycline, erythromycin, and vancomycin. The research points to the frequent presence of antibiotic-resistant genes like vanA , ermB , tetM , and ermA . In addition to their high resistance levels, these strains often exhibit the ability to form biofilms, further complicating their treatment and increasing the risk of gene transfer between bacteria. Studies conducted in Poland have revealed that both environmental and meat production facilities are affected by the presence of antibiotic-resistant enterococci, particularly those resistant to clinically important antibiotics like vancomycin, which is often a last-resort treatment. This resistance poses a significant threat to public health by facilitating the transmission of resistant strains through the food chain, from animals to humans. L. monocytogenes , a foodborne pathogen that causes listeriosis zoonosis, is increasingly being detected in meat and meat products, raising concerns about food safety and public health. Studies have shown different rates of L. monocytogenes in different meats, with chicken, pork, and ready-to-eat meat products being common sources of contamination . The emergence of antibiotic-resistant strains of L. monocytogenes in these foods poses a serious threat to human health, as it could compromise the effectiveness of antibiotic therapy for listeriosis . Interestingly, the prevalence and patterns of antibiotic resistance in L. monocytogenes isolates from meat and meat products vary across studies and geographic locations . While some studies indicate a relatively low prevalence of antibiotic resistance in L. monocytogenes , others report a high prevalence of resistant and multidrug-resistant strains . This discrepancy underscores the need for ongoing monitoring and surveillance of antibiotic resistance in L. monocytogenes across regions and food sources. Kurpas et al. described a detailed genomic analysis of L. monocytogenes strains isolated from ready-to-eat meats and surfaces in meat processing plants in Poland. The study identified a variety of L. monocytogenes strains that possessed genes encoding resistance to antibiotics from several classes . The fosB gene, responsible for resistance to fosfomycin, was detected in several strains. Genes for tetracycline resistance, such as tetM , have also been identified. L. monocytogenes strains also showed resistance to macrolides due to the presence of the ermB gene. 
Macrolides, such as erythromycin, are often used to treat respiratory and other bacterial infections, and resistance is a major challenge . The study also identified multidrug-resistant strains that simultaneously possessed genes encoding resistance to antibiotics from different classes, including aminoglycosides (e.g., aacA gene), β-lactams (e.g., blaZ gene), and sulfonamides (e.g., sul1 gene). These strains have been isolated both from ready-to-eat meat products and from surfaces in processing environments, suggesting that meat processing plants may be a reservoir of antibiotic-resistant strains . The detection of multi-resistant strains in processing environments indicates the possibility of long-term contamination at these sites and the risk of transmission of these strains into meat products . Antibiotic-resistant strains, which can cause severe infections in humans, especially in immunocompromised individuals, pose a serious epidemiological threat . Similar results were reported by Maćkiw et al. , who investigated the occurrence and characterization of L. monocytogenes in ready-to-eat meat products in Poland. The study revealed the presence of this pathogen in several food samples. L. monocytogenes strains were tested for resistance to various antibiotics, and the results showed significant resistance to several key antibiotics. Of most concern was resistance to erythromycin and tetracycline, which are frequently used to treat listeriosis infections. Kawacka et al. present a detailed study on the resistance of L. monocytogenes strains isolated from meat products and meat processing environments in Poland. The results showed that most of the analyzed isolates were antibiotic-susceptible to the most-used antibiotics, such as penicillins, macrolides, and tetracyclines, suggesting that current therapies are effective in treating infections associated with food of animal origin . Particular attention was paid to fluoroquinolones, particularly ciprofloxacin, where rare cases of reduced susceptibility were identified, which is worrisome given that fluoroquinolones are key antibiotics in the treatment of many bacterial infections . In contrast, in the study by Skowron et al. assessing the prevalence and antibiotic resistance of L. monocytogenes strains isolated from meat, researchers analyzed samples from pork, beef, and poultry over three years. They found that 2.1% of the collected meat samples were contaminated with L. monocytogenes , with poultry showing the highest contamination levels. The antibiotic resistance of these strains was concerning, as 6.7% were resistant to all five tested antibiotics. Specifically, the highest resistance rates were observed against cotrimoxazole (45.8%), meropenem (43.3%), erythromycin (40.0%), penicillin (25.8%), and ampicillin (17.5%). Only 32.5% of the strains were sensitive to all antibiotics tested. The occurrence of L. monocytogenes in meat and meat products raises serious food safety and public health concerns, especially due to the emergence of antibiotic-resistant strains. The diversity of prevalence rates and resistance patterns depending on the region and type of product indicates the need for continuous monitoring. Studies in Poland have identified resistance genes to multiple classes of antibiotics, raising concerns about the long-term contamination of meat processing environments and the risk of resistant strains contaminating finished products. 
Multidrug-resistant strains can significantly hinder the treatment of listeriosis infections, which requires strengthening food safety regulations and further research into resistance mechanisms. Furthermore, the findings emphasize the importance of microbiological monitoring and control in meat processing plants to prevent the spread of resistant L. monocytogenes . Regular research into antibiotic resistance among food-related pathogens is crucial, alongside the implementation of appropriate control procedures in food production. Ultimately, further research into resistance mechanisms and their implications is needed to better protect public health. The annual report on trends and sources of zoonoses published in December 2021 by the European Food Safety Authority (EFSA) and the European Center for Disease Prevention and Control (ECDC) shows that nearly one in four foodborne outbreaks in the European Union (EU) in 2020 were caused by Salmonella spp., making this bacterium the most reported causative agent of foodborne outbreaks (694 foodborne outbreaks in 2020) . Salmonella spp. infections in humans are usually caused by the consumption of food of animal origin, mainly eggs, poultry, or pork . An analysis by Gutema et al. shows that beef and veal can also be a source of Salmonella spp. infection because these animals are potential asymptomatic carriers. Multidrug-resistant Salmonella poses a serious threat to public health after foodborne infections . Today, such multidrug-resistant strains are increasingly being isolated from beef, pork , and poultry . According to Commission Implementing Decision 2013/652/EU, which specifies the monitoring of antimicrobial resistance in bacteria from food and food-producing animals, antibiotic resistance monitoring of Salmonella isolated from food and food-producing animals should target broilers, fattening pigs, calves under one year old, and their meat . A study by Szewczyk et al. on the antibiotic resistance of Enterobacterales strains isolated from food showed that most strains (28.0–65.1%) were resistant to a single antibiotic, but 15 strains (34.9%) were resistant to two or more antibiotics. Particularly prominent among them were strains of Escherichia coli and Proteus mirabilis , which were resistant to multiple antibiotics, including beta-lactams (piperacillin, cefuroxime, and cefotaxime), fluoroquinolones, and carbapenems. All isolates were sensitive to gentamicin, and none showed ESBL-type resistance. Strains resistant to high concentrations of antibiotics (256 μg/mL) included Salmonella spp., Hafnia alvei , P. mirabilis , and E. coli . Beta-lactamase-resistant and piperacillin- and cefuroxime-resistant Klebsiella strains (including K. ozaenae and K. rhinoscleromatis ) suggested the ability to produce beta-lactamase enzymes (AmpC and CTX-M), which allows resistance transfer between species. Zarzecka et al. examined in detail the incidence of antibiotic resistance in Enterobacterales strains isolated from raw meat and ready-to-eat meat products. The highest number of isolated strains was identified as E. cloacae (42.4%), followed by E. coli (9.8%), P. mirabilis , S. enterica , P. penneri , and C. freundii (7.6% each), and C. braakii (6.6%), K. pneumoniae , and K. oxytoca (5.4% each). More than half of the isolated strains (52.2%) showed resistance to at least one antibiotic, with the highest number of resistant strains found against amoxicillin with clavulanic acid (28.3%) and ampicillin (19.5%).
The ESBL phenotype was found in 26 strains, while the AmpC phenotype was found in 32 strains. The bla CTX-M gene was present in 53.8% of the ESBL-positive strains, and the CIT family gene was present in 43.8% of the AmpC-positive strains . Raw meat has been identified as a key source of resistant strains, posing a significant public health risk, especially in the context of ready-to-eat products, which can be exposed to improper processing, lack of proper sanitary–epidemiological control and improper storage . Both phenotypic analyses, such as antibiotic susceptibility tests, and genotypic analyses were used in the study, which made it possible to accurately determine the resistance profiles of the tested strains. Mąka et al. analyzed the antibiotic resistance profiles of Salmonella strains isolated from retail meat products in Poland between 2008 and 2012. The results of the study showed that more than 90.0% of the strains exhibited resistance to at least one antibiotic, indicating a high level of resistance in the bacterial population. The highest resistance was found against tetracycline, streptomycin, and sulfonamides, reflecting the widespread use of these antibiotics in animal husbandry. Strains of S. typhimurium were more resistant than other serotypes, with about 20.0% of them showing resistance to five or more classes of antibiotics, classifying them as multi-resistant. Resistance to fluoroquinolones, which are often used to treat Salmonella sp. infections in humans, was also found. In a study by Pławińska-Czernak et al. , researchers analyzed the occurrence of multidrug resistance in Salmonella strains isolated from raw meat products such as poultry, beef, and pork. The study showed that 64.3% of the isolates showed resistance to at least three classes of antibiotics, with the highest resistance reported against tetracyclines (56.5%), aminoglycosides (47.8%), beta-lactams (34.8%), and quinolones (30.4%). A key aspect of the study was the identification of genes encoding resistance, including the tetA , blaTEM , aadA , and qnrS genes, which were responsible for resistance to tetracyclines, beta-lactams, aminoglycosides, and quinolones, respectively. The presence of these genes indicates the widespread spread of genetic resistance among food-related pathogens, which poses a serious threat to public health. Sarowska et al. examined the antibiotic resistance and pathogenicity of E. coli strains from poultry farms, retail meat, and human urinary tract infections. The strains showed significant resistance to a variety of antibiotic classes, including β-lactams, tetracyclines, aminoglycosides, fluoroquinolones, and sulfonamides, indicating the widespread selection pressure exerted by antibiotic use in poultry farming. E. coli strains from meat and poultry farms showed some commonalities with isolates causing human infections, suggesting the possibility that potentially pathogenic strains could be transmitted through the food chain. In the presented studies, the researchers highlight the urgent need for continuous monitoring of antibiotic resistance in animal products, along with the implementation of stricter sanitary standards in the food industry. The researchers emphasize educating producers and consumers about the risks of antibiotic resistance to minimize the risk of foodborne infections. 
Considering the changing resistance profiles, the researchers recommend regular monitoring and restriction of antibiotic use in agriculture, supported by stricter regulations to prevent the spread of resistant strains, especially Salmonella . Multidrug-resistant strains of Salmonella , which are increasingly resistant to tetracyclines, aminoglycosides, and beta-lactams, pose a serious threat to public health. Similarly, high levels of antibiotic resistance have been observed in Enterobacterales strains, including E. coli , isolated from raw meat and animal products. Particular attention was paid to ESBL and AmpC strains, highlighting the importance of reducing antibiotic use in animal husbandry and strengthening sanitary controls in meat processing. The study also highlights the importance of monitoring food safety and zoonotic infection risks to reduce the spread of multidrug-resistant pathogens via food. Alternatives to antibiotic therapy in agriculture and animal husbandry are increasingly being explored to combat the rising challenge of antimicrobial resistance and the negative environmental impacts of excessive antibiotic use . 6.1. Probiotics and Prebiotics Probiotics and prebiotics represent a promising alternative . Probiotics are live microorganisms, typically beneficial bacteria, which confer health benefits to the host when administered adequately . Several health and nutritional benefits have been observed to be provided to animals by probiotics. They promote animal growth and maturation and increase feed intake, digestibility, and performance . Other benefits include improved health outcomes and immune responses , egg production , meat yield and its quality , and milk composition and its production in ruminants . In turn, prebiotics are compounds that induce the growth or activity of beneficial microorganisms, particularly in the gut . When used together, as symbiotics, they promote gut health by enhancing the balance of gut microbiota, which is crucial for maintaining the immune system’s strength . According to Low et al. , these supplements can enhance animal health, improve feed efficiency, and boost growth without relying on antibiotics. This approach is particularly promising in preventing intestinal infections and supporting overall gut immunity, thereby reducing the need for antibiotic interventions. Gupta et al. suggested that symbiotics can help mitigate the need for antibiotics by boosting the animal’s natural defenses against infections. This dual approach is seen as an effective way to improve productivity and animal welfare without the overuse of antibiotics, particularly in poultry and swine production. Śmiałek et al. used a multispecies probiotic (Lavipan, JHJ, Poland) containing Lactococcus lactis , Carnobacterium divergens , Lactiplantibacillus casei , Lactiplantibacillus plantarum , and Saccharomyces cerevisiae in broiler feeding to effectively reduce contamination of poultry with Campylobacter spp. The use of the probiotic reduced colonization of the chickens’ digestive tract and reduced environmental and poultry carcass contamination. In addition, the probiotic supported the poultry’s immune system, improving carcass hygiene parameters and reducing the risk of pathogen transmission in the food chain. The results of the presented research highlight the potential of probiotics as an alternative to antibiotics in poultry farming, supporting sustainable agricultural practices and food safety . 
Future research should focus on multi-strain probiotic formulations tailored to specific livestock species and regional conditions. Advances in genetic engineering could lead to probiotics with enhanced functionalities, such as targeted pathogen inhibition or increased gut resilience . 6.2. Bacteriophages Bacteriophages (phages) are emerging as an innovative and natural alternative to traditional antibiotics, particularly in the battle against multidrug-resistant (MDR) bacteria. These viruses specifically infect and lyse bacterial cells, with a high degree of host specificity, making them valuable tools for targeting pathogenic bacteria without disrupting beneficial microbiota . In agriculture, phages are being explored for controlling bacterial infections in livestock and crops, offering environmentally friendly solutions. They can be administered via water, feed, or directly to infected plants or animals, making them versatile agents in sustainable farming systems . Recent advancements include genetically engineered phages and phage-derived enzymes like lysins, which significantly enhance antibacterial efficacy by breaking down bacterial cell walls. Such innovations have shown promise not only in agriculture but also in clinical settings for wound care and biofilm eradication, where MDR pathogens pose severe threats . Phage–antibiotic synergy (PAS) is another area of growing interest, where the combination of phages and sub-lethal doses of antibiotics enhances bacterial clearance while reducing the likelihood of resistance development . Phage therapy’s specificity is particularly advantageous in addressing biofilms, which are notoriously resistant to antibiotics. Phage cocktails, designed to target multiple bacterial strains, have shown substantial efficacy in disrupting biofilms in healthcare settings . Additionally, bacteriophages offer a unique potential for antivirulence strategies, where phage-induced bacterial resistance may simultaneously reduce bacterial fitness and virulence, further attenuating infections . Despite their vast potential, challenges persist. Regulatory barriers, the need for standardized safety profiles, and the risk of phage resistance require further research and policy development . Nevertheless, with advancements in genetic engineering and a better understanding of phage biology, bacteriophages hold immense promise as versatile and sustainable alternatives to antibiotics in diverse applications. 6.3. Natural Compounds Natural compounds play a pivotal role in addressing the global challenge of antimicrobial resistance (AMR); their diverse mechanisms of action and potential benefits are widely discussed in the scientific literature. For example, polyphenolic compounds such as curcumin, resveratrol, and gallic acid can act as photosensitizers in photodynamic therapy, effectively destroying bacterial biofilms and aiding in the treatment of infections . Marine-derived products, on the other hand, offer unique chemical structures that can be effective against multidrug-resistant bacteria . Phytogenic compounds derived from medicinal plants, including essential oils, alkaloids, and phenolic compounds, have gained traction for their antimicrobial, antioxidant, and anti-inflammatory properties.
Gao et al. explained that phenolic compounds from medicinal plants can inhibit bacterial growth and modulate the gut microbiome in animals, thus supporting health and growth. These natural extracts are also being studied for their role in enhancing the animal immune system, which further reduces the need for antibiotics . Phytogenics are seen as a sustainable alternative that can improve both animal welfare and productivity. Studies of phytotherapeutics have pointed to their bactericidal properties and ability to reverse drug resistance, although challenges such as overexploitation of resources and climate impacts limit their wider use . Another innovative approach involves essential oils (EOs), which show multifaceted bactericidal activity and potential as coatings on medical devices to prevent infections, highlighting their versatility and efficacy compared to synthetic antibiotics . Moreover, molecular docking studies of plant-derived compounds against specific pathogenic targets illustrate their untapped potential in combating protozoan and bacterial resistance . Plant extracts and secondary metabolites, such as terpenoids or alkaloids, also show promising antimicrobial activity, as detailed in reviews of their use as bioactive food preservatives and potential therapeutic candidates . 6.4. Enzymes and Peptides Another promising approach is the use of ribosomal antimicrobial peptides (AMPs), small proteins found naturally in many organisms that exhibit broad-spectrum antimicrobial activity, disrupt multiple bacterial processes, and offer a potential alternative to conventional antibiotics . Enzymes such as proteases and lysozymes, which help break down microbial cell walls, show similar potential. Wang et al. emphasized that AMPs and such enzymes can be incorporated into animal feed to reduce pathogenic bacteria in the gut and improve the overall growth performance of animals while avoiding resistance development . Synthetic AMPs offer a natural, non-toxic method of reducing pathogen loads without leading to resistance, making them an ideal candidate for replacing antibiotics in animal production systems ; Zhang et al. highlighted synthetic AMPs as a promising advancement, combining stability with cost-effectiveness. In addition, natural products such as antimicrobial peptides and fungal-derived compounds offer new opportunities to modulate multidrug resistance . It is also important to consider biotechnological modifications of natural sources to increase their availability and effectiveness . Research on nano-antioxidants and phage therapy as additional methods to combat AMR is also groundbreaking . The past successes of naturally derived antibiotics underscore the importance of integrating traditional knowledge with modern research methods . All this evidence points to the crucial role of natural products in the development of future antimicrobial therapies. 6.5. Vaccines
Research has also turned to the design of vaccines with the specific purpose of minimizing antibiotic resistance in specific groups of microorganisms. Śmiałek et al. indicated that the use of a live attenuated vaccine against E. coli can effectively reduce the use of antibiotics in broiler breeding. The use of the vaccine showed a significant reduction in the number of multi-resistant E. coli strains, increasing their sensitivity to antibiotics. At the same time, vaccinated broilers showed better production parameters, such as faster weight gain and lower mortality, and the vaccination did not adversely affect the effectiveness of other vaccines. The results suggest that the routine use of E. coli vaccine in immunoprophylaxis programs can help improve flock health, reduce the risk of antibiotic resistance, and improve production performance, which is crucial for sustainable poultry farming management . 6.6. Emerging Innovations One innovative solution is the use of nanoparticles (NPs), which exhibit antibacterial properties, raising hopes for their use in the fight against drug-resistant pathogens . Thanks to their properties, they not only have antibacterial effects themselves, but can also be carriers for antibiotics and natural antimicrobial compounds . Examples of such nanoparticles include Ag-NP, Zn-NP, Au-NP, Al-NP, Cu-NP, and Ti-NP, and metal oxide nanoparticles such as ZnO-NP, CdO-NP, CuO-NP, and TiO 2 -NP, among others. All these structures have shown effectiveness in destroying bacteria . A study by Joost et al. confirmed that treatment with TiO 2 nanoparticles can lead to an increase in the volume of bacterial cells, causing damage to their cell membranes and death. They have also been shown to be effective against multidrug-resistant (MDR) pathogens such as E. coli , K. pneumoniae , Pseudomonas aeruginosa , Acinetobacter baumannii , methicillin-resistant S. aureus , and E. faecalis . The mechanism involves the generation of reactive oxygen species (ROS), which leads to oxidative stress in pathogen cells . Nanoparticles are also being explored as carriers for antibiotics to increase the effectiveness of therapy and minimize the risk of developing bacterial resistance . The conjugation of antibiotics, such as ampicillin, kanamycin, or streptomycin, with gold NPs has achieved lower minimum inhibitory concentrations against Gram-positive and Gram-negative bacteria than with the drugs used alone . Similarly, vancomycin-loaded gold nanoparticles showed enhanced efficacy against strains resistant to this antibiotic by disrupting the stability of bacterial cell membranes . Studies have also shown that bimetallic nanoparticles, i.e., combinations of two different metals, are more effective than their monometallic counterparts . They have better electron, optical, and catalytic properties, which translates into many times greater efficacy against MDR pathogens while reducing the required therapeutic dose. The growing focus on alternatives to antibiotics in agriculture and animal husbandry is a response to the urgent need to combat AMR and reduce the environmental footprint of traditional farming practices. Probiotics, prebiotics, vaccines, phage therapy, medicinal plant extracts, enzymes, and antimicrobial peptides all represent promising tools in this effort. These strategies help maintain animal health, improve productivity, and reduce dependency on antibiotics, thus offering a sustainable path forward for the agricultural industry.
Antimicrobial resistance in meat and meat products in Poland presents several challenges for public health, food safety, and environmental sustainability that require a more critical and coordinated approach. In Poland, the increasing prevalence of antibiotic-resistant bacteria in meat and meat products underscores the critical need for effective strategies to mitigate the spread of resistance. Microorganisms such as Campylobacter spp., Staphylococcus spp., Enterococcus spp., L. monocytogenes , and Enterobacterales (including Salmonella spp. and E. coli ) are commonly found in animal farming environments and food products, often exhibiting resistance to multiple classes of antibiotics. Current data on AMR are limited to isolated studies, with a lack of comprehensive nationwide surveillance, which hampers our understanding of resistance patterns across different regions and food products. The cited research results highlight the critical need for a multifaceted approach to antimicrobial resistance management in Poland, including stricter controls on antibiotic use in animal husbandry, improved monitoring of resistance patterns and the promotion of alternative strategies to reduce antibiotic dependence.
Additionally, inconsistent application of monitoring systems and weak regulatory enforcement on antibiotic usage in livestock production contribute to the persistence of AMR. The environmental impact of farming practices, particularly the contamination of soil and water with resistant bacteria and genes, remains under-researched but is likely a significant pathway for the spread of AMR. To address these issues, future efforts must focus on establishing a standardized, nationwide surveillance system for monitoring both antibiotic usage and resistance in livestock. Moreover, further research is needed to understand the environmental persistence of AMR, particularly in regions with intensive farming operations. There is also a growing need for alternatives to antibiotics, such as probiotics, phage therapy, and antimicrobial peptides, to reduce dependency on traditional antibiotics in agriculture. Strengthening regulatory frameworks, improving compliance with EU standards, and raising awareness about the risks of AMR among farmers and veterinarians will be crucial. By focusing on these areas, Poland can make significant progress in controlling the spread of AMR in its food systems and protecting public health and the environment. |
Mass Spectrometry–Based Proteomics in Clinical Diagnosis of Amyloidosis and Multiple Myeloma: A Review (2012–2024) | 807cb28a-0622-4d64-8107-6233a718522a | 11836596 | Biochemistry[mh] | Introduction Early detection and identification of pathological conditions associated with protein disorders in patients in the hospital setting represents a new trend in the diagnosis of various diseases. However, the identification of pathological biomarkers, that is, specific disease‐causing proteins, is extremely difficult, time consuming and costly. Proteins are not only biomarkers of these diseases, but their misfolded forms are often themselves the cause of pathological problems . For example, amyloidosis, which is caused by the accumulation of misfolded proteins in various organ tissues, may be influenced by posttranslational modifications (PTMs). However, it is not definitively proven that PTMs are required for a protein sequence to form amyloids. Other diseases that can be caused by PTMs include Alzheimer's disease, Parkinson's disease, or cystic fibrosis. Protein pathology can be caused by PTMs, which involve changes in the amino acid side chains of proteins. There are more than 400 types of these modifications, the most common being phosphorylation, acetylation, N‐glycosylation, amidation, and many others . Although these modifications are essential for the proper functioning of proteins, their disruption or excessive PTMs can lead to protein misfolding and aggregation, causing the diseases mentioned above . Proteomics in the hospital setting is a multidisciplinary field that overlaps significantly with various medical specialties. Cardiac surgery departments, which provide heart samples, internal medicine departments, which provide kidney or adipose tissue samples, and hemato‐oncology departments, which specialise in blood sampling, collaborate in the diagnosis of disease. The development of modern analytical methods such as mass spectrometry (MS) techniques is taking the importance of proteomics in hospitals to a new level. Routine techniques for the determination of specific proteins of given diseases are mainly immunochemical or electrophoretic methods, but the use of these methods can be challenging in the diagnosis of some diseases. A typical example is multiple myeloma; this disease is characterized by proliferation of plasma cells in the bone marrow, leading to overproduction of nonfunctional immunoglobulins. The analyte of interest here is M‐protein (myeloma protein), a key biomarker of MM. Determination of this protein is particularly important to quantify the protein in the blood, which reflects progression and helps to monitor treatment. In addition, monitoring of minimal residual disease (MRD) is useful. Routinely, is this protein determined from blood using electrophoretic methods? Although electrophoresis is a sensitive method, significant interferences can occur in this case. This is due to interference with monoclonal antibodies used in the treatment of MM, which can lead to false positive results . Despite the great advantages that electrophoretic methods provide, they cannot fully cover the need for MRD monitoring. The diagnosis of amyloidosis can also be mentioned. As previously stated, it is a disease in which misfolded proteins are deposited in tissues, resulting in organ dysfunction. 
Detection of this disease is performed at pathology institutes by Congo red staining and observation of green plaques under polarized light, which is the gold standard for routine diagnosis of this disease. This type of detection can be further supplemented by immunohistochemical (IHC) examination of the exact type of amyloid, but this additional determination may not be completely specific , and only 76% of amyloid types are correctly identified by IHC. However, accurate identification of the specific type of amyloidosis is critical for making treatment decisions and predicting disease progression. Approximately 30 amyloid proteins have been documented in the literature. However, according to the article by Dasari et al. , out of nearly 16 000 human tissue and fat samples, 58.99% were identified as the immunoglobulin light chain type (AL). Amyloid transthyretin (ATTR) was the second most common, with an incidence of 28.44%. Published data suggest that these two types account for 90% of all amyloidosis cases. Table lists the clinical parameters of the most frequent amyloid proteins, as well as parameters related to myeloid M‐protein. Human blood contains a wide range of proteins, with a concentration of approximately 60–80 g/L. In the plasma, which is the liquid part of the blood, about 60% of the total protein content is albumin, 35% consists of various types of globulins, and the remainder is fibrinogen. Overall, plasma contains around 10 000 different proteins, with concentrations spanning a broad range of about 12 orders of magnitude. If amyloid proteins proliferate in the tissues, they are also washed into the bloodstream, where the concentration of amyloid protein can in some cases be as high as 1 g/L in the case of AL or 0.1 g/L in the case of serum amyloidosis . The concentration of proteins such as transthyretin (TTR) does not increase dramatically during ATTR amyloidosis. In this type, mutation and formation of amyloid fibrils predominantly occur, rather than its overproduction. In this regard, we are able to detect its presence; however, we are unable to determine the site of occurrence in the tissues. In the context of pathological conditions associated with protein deposition, simple, specific and sensitive techniques for their determination are becoming increasingly important. In routine clinical diagnostics in healthcare facilities, MS is gaining importance because it can not only identify but also quantify the individual structural forms of these proteins with minimal interference compared to the previously used analytical techniques. The present review provides a comprehensive overview of clinically important proteins analyzed by MS in clinical laboratories published between 2012 and 2024. This review describes clinically relevant pathological proteins in different biological matrices. Proteomic approaches for the identification and quantification of amyloid and multiple myeloma using liquid chromatography combined with MS detection are discussed. Introduction to Proteomic MS MS has become an indispensable tool in modern proteomics, providing high sensitivity, specificity and detailed structural information about proteins. Its applications extend far beyond proteomic analysis alone, making it a cornerstone technology in several areas of biological and clinical research. 
The widespread use of MS in proteomics is primarily due to its exceptional sensitivity and specificity, allowing the detection and identification of proteins at very low concentrations and making it invaluable in proteomics and clinical research. This capability is enhanced by advanced technologies like tandem mass spectrometry (MS/MS) . This is particularly important in clinical diagnostics, such as the measurement of M‐protein concentrations in blood for the diagnosis of multiple myeloma and the monitoring of MRD . In addition, MS is a powerful tool for structural analysis of proteins in proteomics due to its ability to distinguish molecules based on their mass‐to‐charge ratio (m/z) and fragmentation patterns (MSn). Fragmentation provides unique information about protein structure, including amino acid sequence. Ion mobility spectrometry (IMS) adds another dimension by analyzing ion conformation via collision cross section (CCS), which helps distinguish isobaric and conformationally different proteins. Isotopic resolution improves the accuracy of molecular weight determination and can distinguish between isotopic forms of proteins. Spectral libraries allow protein identification by comparing experimental spectra with known reference data. Techniques such as electron capture dissociation (ECD) and electron transfer dissociation (ETD) preserve PTMs during fragmentation, making them critical for the analysis of large proteins and modified forms. Native mass spectrometry (Native MS) is used to study proteins in their native state, revealing protein–protein and protein–lipid interactions and providing insight into quaternary structures and functional complexes. Together, these advanced techniques provide a comprehensive understanding of protein structure and function in biological systems . A key advantage of MS is its ability to simultaneously identify and quantify proteins in complex biological mixtures. This capability is essential for large‐scale proteomic studies where thousands of proteins need to be analyzed and their abundances compared across samples . Compared to traditional protein measurement methods used in clinical biochemistry laboratories, such as electrophoretic and immunochemical techniques, MS offers several advantages. Electrophoretic methods have limited sensitivity for low abundance proteins and suffer from low reproducibility. Immunochemical methods, while highly sensitive and specific (e.g., ELISA), have limited multiplexing capabilities, require specific antibodies for each target protein and cannot measure PTMs . MS provides several unique capabilities not offered by traditional methods. It allows the simultaneous analysis of thousands of proteins, facilitating complex studies of proteomic changes in biological systems. MS also allows detailed characterization of PTMs, which is essential for understanding the regulatory mechanisms of protein function . In addition, MS can achieve highly accurate quantification using isotope‐labeled standards, providing absolute concentrations of proteins and peptides in samples . Another significant advantage is the ability to identify novel proteins and peptides without the need for specific antibodies, which is critical for the discovery of new biomarkers and therapeutic targets. However, proteomic analysis using MS faces challenges, particularly with regard to the variability of patient proteomes.
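Before turning to those challenges, the mass-to-charge ratio at the heart of these measurements can be made concrete with a small, purely illustrative calculation. The sketch below is not taken from any of the reviewed studies: it sums standard monoisotopic residue masses for a hypothetical peptide ("SAMPLER") and reports the m/z values expected for its singly, doubly, and triply protonated ions.

```python
# Minimal illustration (hypothetical peptide, not from the reviewed studies):
# monoisotopic peptide mass and m/z values for several charge states.

MONOISOTOPIC_RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565   # mass of H2O added to the sum of residue masses
PROTON = 1.007276   # mass added once per positive charge

def peptide_mass(sequence: str) -> float:
    """Monoisotopic mass of an unmodified peptide."""
    return sum(MONOISOTOPIC_RESIDUE_MASS[aa] for aa in sequence) + WATER

def mz(mass: float, charge: int) -> float:
    """m/z of the [M + zH]z+ ion."""
    return (mass + charge * PROTON) / charge

if __name__ == "__main__":
    seq = "SAMPLER"  # hypothetical tryptic-like peptide, for illustration only
    m = peptide_mass(seq)
    for z in (1, 2, 3):
        print(f"{seq}  [M+{z}H]{z}+  m/z = {mz(m, z):.4f}")
```

Isotope-labeled internal standards of the kind discussed later shift these masses by a predictable amount, which is what allows the labeled and endogenous forms of the same peptide to be distinguished and compared within a single run.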
Diseases such as multiple myeloma exhibit significant heterogeneity, meaning that each patient sample has a unique protein profile that can be further complicated by mutations. This variability complicates the standardization of MS‐based methods, requiring an individualized approach to sample analysis and a focus on absolute quantification for accurate diagnosis . In contrast, amyloid measurements do not involve such extensive variability, allowing for better standardization of methods, although accurate identification of amyloid type remains critical. Despite these challenges, MS has become an essential tool in modern proteomics. Its ability to provide detailed structural information and highly sensitive and accurate analyses of proteins and other biomolecules makes it invaluable for advanced proteomic studies, clinical diagnostics, and a wide range of other applications in biology and medicine. MS is revolutionizing the study and understanding of complex biological systems. Biological Matrices in Proteomic Analysis of Multiple Myeloma and Amyloidosis 3.1 Multiple Myeloma In multiple myeloma, blood serum, plasma and urine are the most commonly used matrices for the detection and quantification of M‐protein, a key biochemical marker of this disease . Blood serum is particularly important for the detection of M‐protein by MS, as shown in a study by Dunphy et al. comparing proteomic changes in extramedullary and intramedullary myeloma. In clinical practice, serum M‐protein concentrations can range from a few mg/L to tens of grams per liter (g/L) in advanced cases. For example, serum M‐protein concentrations above 30 g/L typically indicate active multiple myeloma. A study by Chanukuppa that combined serum and bone marrow analysis identified 279 and 116 differentially expressed proteins in bone marrow and serum, respectively, highlighting the importance of these matrices for diagnosis and monitoring of disease progression. Another important sample is bone marrow, which is the primary site of tumor growth in multiple myeloma. Proteomic analysis of bone marrow allows the identification of pathological proteins and clonal plasma cells, which is critical for assessing disease progression and treatment response. M‐protein concentrations in bone marrow can be higher than in blood, depending on the number of clonal cells and the degree of bone marrow infiltration. In addition to these traditional matrices, other biological samples have been explored that offer new possibilities for the diagnosis and monitoring of multiple myeloma. For example, saliva is being investigated as a noninvasive biofluid for detecting various diseases, including multiple myeloma. Although this is a still developing field, studies suggest that specific proteins and peptides in saliva could potentially serve as biomarkers for systemic diseases such as multiple myeloma. However, the clinical implementation of saliva‐based diagnosis of multiple myeloma is still in the research phase, and further validation is needed to develop reliable diagnostic tests for this disease using saliva samples . The kidneys play an important role in the detection of M‐protein, especially in cases where multiple myeloma has caused kidney damage, known as myeloma kidney. High levels of M‐protein in the urine, known as Bence‐Jones proteinuria, can reach several hundred mg/L or more in advanced disease. 
The presence of M‐protein in the kidneys can lead to significant functional impairment, often detected by decreased glomerular filtration or the presence of protein in the urine. 3.1.1 Amyloidosis Tissue samples play a critical role in amyloidosis, especially biopsies of organs affected by amyloid deposition. The most commonly used tissues are adipose tissue, myocardium, kidney, liver, and peripheral nerves. These tissue samples are often formalin‐fixed and paraffin‐embedded (FFPE) for histologic and proteomic analysis . FFPE blocks allow the detection and typing of amyloid, which is critical for determining appropriate treatment. Amyloid protein concentrations in these tissues can be difficult to measure due to their low levels and the presence of other abundant proteins. The article describes blood levels of β‐amyloid in units of pg/mL, which makes its detection challenging. Another potential matrix is cerebrospinal fluid (CSF), which has been studied in relation to neurological forms of amyloidosis. CSF may contain proteins and peptides related to amyloidogenesis, potentially providing new diagnostic tools for patients suspected of having central nervous system amyloidosis. However, the detection of amyloid proteins in CSF can be challenging due to their typically low concentrations, which are in the hundreds of pg/mL. In some diseases, there may also be a natural depletion, for example, in Alzheimer disease, where the analyte Aβ‐42 is converted into plaques and its concentration in CSF is even more reduced . The heart is another organ where amyloid proteins can accumulate, leading to cardiac amyloidosis. Amyloid protein levels in the heart can be high, especially in transthyretin amyloidosis (ATTR). Significant amounts of amyloid can be detected in heart biopsies, leading to myocardial stiffness, diastolic dysfunction, and heart failure. Quantification of amyloid proteins in the heart is typically performed by immunohistochemistry (IHC) or MS. Although peripheral blood is a less invasive alternative to tissue biopsy, it has limited diagnostic value because it cannot determine which organ is affected by amyloidosis. Blood levels of amyloid proteins are often low, making them difficult to detect; for example, serum amyloid A (SAA) levels can rise to several mg/L during inflammation or amyloidosis but may be less than 10 mg/L in healthy individuals. Therefore, adipose tissue aspiration is considered an optimal solution that offers a compromise between invasiveness and diagnostic value. This approach allows the collection of a representative sample for MS analysis that can confirm the presence of amyloid and accurately determine its type, which is critical for accurate diagnosis and therapy .
Proteomic Strategies in Amyloidosis and Multiple Myeloma

The concentration of proteins in human biological matrices varies widely depending on the type of tissue, its function, and the conditions associated with a particular disease. The complexity of these analyses is compounded by the diverse nature of biological matrices, which require meticulous preparation to ensure accurate protein extraction and minimal interference from matrix components. Because biological samples are often difficult to analyze due to matrix complexity, sample preparation is an essential part of the analytical procedure.

4.1 Proteomic Approaches in Amyloid Analysis

In the field of amyloid analysis, many papers focus primarily on the analysis of these proteins from organ tissues and less frequently from adipose tissue. Analysis of FFPE blocks was reported in seven articles; of these, the article by Dasari et al. also covers adipose tissue. The analysis of adipose tissue is very complicated, and there are therefore few publications on this topic. However, the authors were able to identify amyloid proteins even in such a complex matrix. In most cases, they identified ALκ (amyloid light kappa chain), ALλ (amyloid light lambda chain), AA (amyloid type A), and ATTR (transthyretin amyloid). Some studies also focus on native tissue without fixation. One paper dealt with the identification of amyloid proteins from CSF. The authors were able to detect different types of amyloid proteins using the laser microdissection (LMD) technique in connection with MS detection (see Table ).

In almost all articles, the authors focused only on identifying the type of amyloid protein. However, one of the papers also dealt with quantification. In this article, the authors analyzed β-amyloid-related analytes, mainly Aβ1–38, Aβ1–40, and Aβ1–42, in CSF. The authors used isotope-labeled internal standards of the analytes (15N51-Aβ1–38, 15N53-Aβ1–40, and 15N55-Aβ1–42) to ensure accurate quantification. By using these isotope-labeled standards, the authors were able to minimize the risks associated with sample preparation errors and largely eliminate matrix effects, thus ensuring high precision and reliability of the measurements. Accurate quantification of these peptides allows not only diagnosis but also monitoring of disease progression or treatment efficiency.

Regarding the samples, the largest number was analyzed in the publication , with a total of 16 175 samples over 10 years (2008–2018). Of these, 58.99% were identified as light chain amyloidosis and 28.44% as ATTR amyloidosis. Over the years, the authors of this study identified many other, less common amyloid proteins (see Table ), offering comprehensive insights into amyloid subtypes.

Where sample preparation was described, microdissected tissue from FFPE blocks was always incubated in 35 μL of 10 mM Tris + 1 mM EDTA and 0.002% Zwittergent buffer at 98°C for 90 min. Authors of articles focusing on FFPE tissue blocks agree on this procedure. This method has the advantage of efficiently breaking down cross-links formed during the fixation process, providing better access to proteins for downstream analysis.
However, the reliance on high temperatures and chemical buffers could result in some protein degradation. The next steps vary from author to author, but the final step is always overnight trypsin digestion at 37°C, which is a standard approach for bottom-up proteomics. Although this method provides effective protein digestion, some more resistant proteins may be digested incompletely, which can affect the accuracy of protein quantification. In addition, any residual cross-linking could prevent complete degradation of proteins, especially in highly fixed tissues.

In contrast, sample preparation of adipose tissue is not very standardized, and different approaches can be used. For example, in one paper , the tissue was defatted in several steps, including soaking in acetone and air drying. This preparation reduces lipid interference but is time consuming and can lead to protein loss. Another paper described a simpler approach of rinsing with isotonic solution followed by maceration in 100 μL of buffer (composition in Table ), with subsequent centrifugation and dialysis steps, ending with trypsin digestion; this workflow is also found in other articles using adipose tissue. Although faster, this method risks incomplete removal of lipids, which may lead to problems such as ion suppression in MS and reduced detection sensitivity for proteins. One article presented the whole protocol in its supplement; the authors used maceration in a buffer consisting of 7 M urea + 2 M thiourea + 4% CHAPS + 65 mM DTT. This was followed by several centrifugation steps, ending with dialysis against ammonium bicarbonate. The resulting extract was then digested with trypsin. Maceration in this buffer is particularly effective in preparing protein samples for proteomic analysis because it ensures that proteins are completely denatured, solubilized, and reduced. The buffer facilitates efficient extraction and processing of proteins from complex biological samples, enabling high-quality proteomic data.

Proteomic analysis can be performed using two basic approaches: bottom-up and top-down. Tables and give an overview of published papers on the analysis of amyloid proteins and M-proteins from biological matrices. The bottom-up approach was used in paper , and the top-down approach was used in only one paper . Bottom-up proteomics is the most commonly used method, especially when working with complex samples such as FFPE blocks , native tissues , and adipose tissue . This method involves the enzymatic digestion of proteins into smaller peptides, allowing detailed and sensitive analysis of highly complex protein mixtures (see the digestion sketch below). Its main advantages are higher sensitivity and the ability to analyze a wide range of proteins simultaneously. However, this approach can lose information about the overall structure of proteins and post-translational modifications (PTMs), which can be a limitation for studying certain aspects of proteins (Table ). On the other hand, top-down proteomics focuses on the analysis of intact proteins, providing direct information about their primary structure, PTMs, and protein variants. Although it provides a more comprehensive view of protein structure, this method is technically more demanding, requires highly specialized equipment, and is generally less effective when analyzing highly complex samples such as tissue lysates.
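As a concrete illustration of the digestion step at the heart of the bottom-up workflow just described, the sketch below applies the conventional trypsin rule (cleavage C-terminal to K or R, but not before P) to an arbitrary demo sequence. Real digests also contain missed cleavages and modified peptides, so this is only a conceptual sketch, not any of the cited protocols.

```python
import re

def tryptic_peptides(sequence: str, min_length: int = 1) -> list[str]:
    """In-silico tryptic digest: cut after K or R unless the next residue is P."""
    # Zero-width split points follow the conventional trypsin specificity rule.
    peptides = re.split(r"(?<=[KR])(?!P)", sequence)
    return [p for p in peptides if len(p) >= min_length]

# Hypothetical demo sequence, purely for illustration.
demo = "MKWVTFISLLLLFSSAYSRGVFRRDTHK"
for peptide in tryptic_peptides(demo):
    print(peptide)
```

Each printed peptide is what would subsequently be separated by LC and identified from its fragmentation spectrum, which is why bottom-up data describe peptides first and proteins only by inference.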
In the literature, most authors working with FFPE blocks or adipose tissue use the bottom-up approach because it is less time consuming and better suited for handling complex samples. Exceptions are studies such as Gonzalez Suarez et al. , which do not provide information on native tissue processing, and Lin et al. , which do not use classical proteomic approaches. Overall, both methods have specific advantages and disadvantages depending on the type of sample and the desired information. Bottom-up is more suitable for broad detection and quantification of proteins in complex mixtures, whereas top-down is invaluable for detailed characterization of intact proteins and their modifications.

4.2 Proteomic Approaches in Multiple Myeloma

Proteomic approaches in multiple myeloma involve the comprehensive analysis of proteins expressed in this type of cancer, providing insights into disease mechanisms. There are not many articles focusing on the analysis of human blood for this purpose. In clinical practice, blood samples are routinely used to measure measurable residual disease (MRD) in this disease; however, flow cytometry and next-generation sequencing (NGS) are the main techniques used. NGS offers the highest sensitivity and specificity and is ideal for deep molecular analysis, but it is more expensive and technically and bioinformatically challenging. Flow cytometry is a very efficient, fast, and relatively inexpensive method with high sensitivity and specificity, but it may have limited ability to detect very low levels of MRD. Therefore, efforts are being made to incorporate MS into the routine measurement of this disease because it provides a good compromise between sensitivity and specificity and allows quantitative analysis, although it is technically more challenging and expensive than flow cytometry.

The authors of papers used serum as a matrix for M-protein analysis, whereas the authors of paper used plasma and, in addition, also studied bone marrow. Using plasma as a protein-rich matrix instead of serum allowed the authors to identify distinct protein and metabolite signatures of multiple myeloma. This distinction is crucial because plasma can provide a more complete protein profile due to the presence of coagulation factors that are removed in serum. However, the use of plasma may introduce additional complexity in sample preparation and variability in results due to clotting factor interference.

The authors made extensive use of the MALDI-TOF combination , which offers high-throughput capabilities and simplicity in sample preparation, making it suitable for large-scale studies. However, its resolution and its ability to detect low-abundance proteins in complex matrices are the main limiting factors that separate this technique from others. On the other hand, more sensitive tandem arrangements can be found, such as the Orbitrap combination . The Orbitrap can provide higher mass accuracy and better resolution in this regard, resulting in more detailed analysis of complex matrices, but it requires more sophisticated instrumentation and may be less accessible. Conversely, the combination of quadrupole and linear ion trap used in offers greater flexibility in detecting a wider range of molecular weights but may be more time consuming and require more complex sample preparation.
Additionally, the Ig-LC-MS technique used in article is specialized for measuring immunoglobulin light chains and offers targeted insights into specific immunological biomarkers, although it is more focused and may not capture the broader proteomic profile observed with other MS techniques.

Some authors also reported limits of detection or quantification for their methods. The article dealing with M-protein analysis gives LOQs of 1.95–3.52 μg/mL for five out of six patients (patient 6 had an LOQ of 16.3 μg/mL, which the author explains by increased background noise). The author reports an LOD for M-protein identification of 15 μg/mL, a significant decrease in sensitivity compared to the results in article . Tryptic digestion was used for sample preparation ; details of the preparation are given in Table . The bottom-up approach was considered the proteomic method used, mainly because of the digestion step described in article . In article , the authors aimed at top-down proteomics, analyzing intact proteins without a digestion step.

The analysis of multiple myeloma from blood can be challenging, especially because the M-protein sequence is specific to each patient, so each patient must be sequenced separately. For this reason, some papers do not analyze as many samples as the amyloid studies. The largest number of samples was analyzed in article , with 585 samples; MS screening showed a total of 66 positive samples. Regarding sample preparation, the authors took different approaches. For example, the authors used preparation in a PCR plate, where all incubation, washing, and reduction steps took place. This type of preparation was demanding in terms of the number of steps required; incubation and reduction of the samples took 30 min each. Another paper focused on a less demanding preparation in this respect, using immunodepletion of plasma with an immunodepletion resin for 60 min, followed by centrifugation and protein digestion. The final step was acidification with 2% TFA in 20% ACN. In this article, the authors also used both plasma and bone marrow samples for M-protein determination. They report 225 proteins with significantly different abundance in bone marrow mononuclear cells, compared with 22 such proteins in plasma.
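The LOD/LOQ values discussed above can be derived in several ways; one common convention (ICH-style, LOD = 3.3σ/S and LOQ = 10σ/S, from the calibration slope S and the standard deviation σ of the blank or low-level response) is sketched below with invented numbers, purely to show the arithmetic. The cited studies may have used different procedures.

```python
# One common way to estimate LOD/LOQ from a calibration curve
# (ICH-style: LOD = 3.3*sigma/S, LOQ = 10*sigma/S).
# The numbers below are made up for illustration only.

def lod_loq(sigma_blank: float, slope: float) -> tuple[float, float]:
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq

# Hypothetical values: blank noise of 120 counts, slope of 200 counts per (ug/mL)
lod, loq = lod_loq(sigma_blank=120.0, slope=200.0)
print(f"LOD ~ {lod:.2f} ug/mL, LOQ ~ {loq:.2f} ug/mL")
```

Because both quantities scale with the noise-to-slope ratio, the elevated background noise mentioned for patient 6 directly explains the higher LOQ reported there.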
Trends in Liquid Chromatography for the Analysis of Amyloidosis and Multiple Myeloma

Column separation in LC-MS protein analysis is critical to achieving high accuracy and reliability of results. The choice of columns and sample preparation varies widely in proteomic studies, especially in the analysis of multiple myeloma and amyloidosis. When analyzing complex biological samples containing hundreds to thousands of different proteins, it is essential to effectively separate these proteins into individual fractions, with each peak corresponding to a specific protein or peptide. This separation allows their accurate identification and quantification by MS. In addition, good separation increases the sensitivity of the analysis by concentrating analytes into narrower peaks, making it easier to detect low concentrations of proteins. Conversely, poor separation can lead to overlapping signals in the MS detector, reducing the accuracy and sensitivity of detection and increasing the risk of interferences such as ion suppression. This in turn increases the risk of incorrect protein identification, especially in complex mixtures where similar peptides may elute together. Therefore, high-quality chromatographic separation not only ensures better resolution and sensitivity but also improves the reproducibility and accuracy of the analysis, which is essential for the validity of the results.

The trend in LC-MS column chromatography of proteins, with a focus on amyloidosis and multiple myeloma in biological matrices, is to reduce column internal diameters, which requires lower mobile phase flow rates. This results in less sample dilution and consequently increased method sensitivity, because analytes are concentrated into narrower peaks. This trend is supported by the increasing use of micro- and nano-LC techniques, which operate at flow rates of microliters to nanoliters per minute rather than the traditional milliliters per minute. Thanks to these techniques, the methods benefit from both increased sensitivity and stable ionization, resulting in improved analyte transfer to the mass detector and thus better overall data quality. These advantages have been well documented in published proteomic studies (Table ). However, smaller columns and lower flow rates come with some trade-offs. Reducing the internal diameters of the columns places greater demands on precise flow control and optimization to avoid blockages and other associated problems. Downscaled columns are a good option when the required sensitivity cannot otherwise be achieved; however, they can be more prone to problems associated with carryover, especially when dealing with such complex, protein-rich matrices.

Regarding columns for proteomic analysis, different types of columns can be used. One of the most commonly used types is the reversed-phase (RP) column, which relies on hydrophobic interactions between the analyte and the stationary phase. This method has been extensively applied in amyloid and multiple myeloma research, with C18 columns featuring in most studies . Owing to these hydrophobic interactions, RP columns can efficiently separate nonpolar proteins.
However, a limitation of these columns is the separation of highly hydrophilic analytes, which may lead to losses during separation. This drawback is particularly apparent in the paper , which instead exploits HILIC columns. Separation on C18 columns in particular has most likely been utilized for the analysis of MM because of the abovementioned hydrophobic protein–stationary phase interactions . Another option is ion-exchange columns used in multidimensional liquid chromatography, also known as the MudPIT technique. This technique combines the use of ion-exchange and RP columns in several dimensions. A paper dealing with the analysis of amyloid from adipose tissue used this technique, employing a preparative strong cation-exchange (SCX) column followed by a C18 separation column. However, the complexity of this technique may limit its wider application due to the increased time and resource requirements compared to simpler methods such as one-dimensional RP separation.

In articles focusing on amyloid typing, different types of trapping columns have been used, for example, an Optipak trap filled with Magic C8 beads and a C18 Acclaim PepMap Nano Trap . The use of these trapping columns prior to the main separation is crucial for enhancing the efficiency of peptide separation. The Optipak trap packed with Magic C8 beads utilizes moderate hydrophobic interactions, which can be particularly beneficial for capturing peptides of medium hydrophobicity. However, the use of this type of precolumn can be tricky, particularly because of the loss of retention of highly hydrophobic or highly hydrophilic peptides, which can potentially affect resolution. The C18 Acclaim PepMap Nano trapping column, on the other hand, offers strong hydrophobic interactions due to the C18 stationary phase, making it more suitable for trapping and concentrating highly hydrophobic peptides, but with a risk of eluting highly hydrophilic peptides. It should also be mentioned that precolumns are used in the analysis of multiple myeloma as well, namely PepMap columns and PepMap precolumns, known for their robust performance in peptide retention under low-flow conditions. However, as with all trapping columns, their efficiency can depend on the specific peptide properties, flow rates, and mobile phase conditions used in the analysis. Unfortunately, the authors provided few details regarding the further use of trapping columns in multiple myeloma analysis.

Column sizes varied, with the most commonly used length being 15 cm , but shorter columns were also found: a 10-cm column in an article dealing with MudPIT analysis of adipose tissue and a 5-cm HILIC column for the analysis of β-amyloid from CSF. Shorter columns can reduce analysis time but can also compromise resolution, especially in highly complex proteomic samples where overlapping peaks can make detection impossible. Regarding internal diameters, the use of nanoflow LC methods has resulted in a dramatic reduction of internal diameters to 75 μm . This reduction in diameter offers several advantages, particularly in terms of sensitivity. Smaller internal diameters concentrate the analyte in a narrower flow path, resulting in sharper peaks and higher sensitivity, which is critical for detecting low-abundance proteins in complex biological samples.
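A rough sense of why narrower columns help: at constant linear velocity and equal injected amount, both the flow rate and the chromatographic dilution scale with the square of the internal diameter. The sketch below runs that idealized calculation for a 2.1 mm column scaled down to 75 μm; the 300 μL/min starting flow is an assumed typical value, and real gains are smaller because of extra-column effects, loading limits, and ionization behavior.

```python
# Idealized down-scaling estimate: at constant linear velocity and equal injected
# amount, flow rate and analyte dilution both scale with the square of the column
# internal diameter, which is the usual back-of-the-envelope argument for the
# sensitivity gain of nano-LC. Real-world gains are smaller.

def scale_factor(d_wide_mm: float, d_narrow_mm: float) -> float:
    return (d_wide_mm / d_narrow_mm) ** 2

factor = scale_factor(2.1, 0.075)      # 2.1 mm analytical vs 75 um nano column
flow_ul_min = 300.0 / factor           # assumed 300 uL/min scaled at constant linear velocity
print(f"theoretical dilution/sensitivity factor: ~{factor:.0f}x")
print(f"equivalent nano flow: ~{flow_ul_min * 1000:.0f} nL/min")
```

The resulting factor of several hundred, and a flow in the hundreds of nL/min, is consistent with the micro- to nanoliter-per-minute regime described above.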
Particle size is another critical parameter; in many studies, particle sizes range from 3 to 5 μm , in contrast to classical UHPLC applications using 1.7-μm particles. The larger particle sizes in nano-LC methods are necessary to accommodate the low flow rates and smaller column diameters, though they may reduce separation efficiency compared to the smaller particles used in UHPLC. Gradient elution of the mobile phase was used in most articles . Where stated, the authors used mobile phases of water and acetonitrile with formic acid additives, which support effective peptide elution and ionization . However, the lack of detailed information about mobile phase conditions in some studies leaves room for potential inconsistencies in comparative analyses.

MS

In recent decades, MS-based proteomics has become a key tool for the detailed investigation of molecular mechanisms in several diseases, including amyloidosis , multiple myeloma , Alzheimer's disease , Parkinson's disease , cardiovascular diseases , various types of cancer , diabetes , autoimmune diseases such as rheumatoid arthritis , and neurodegenerative diseases such as Huntington's disease . In addition, MS-based proteomics has been instrumental in the study of infectious diseases, including HIV/AIDS and hepatitis , and kidney diseases such as glomerulonephritis . Due to its high sensitivity, resolution, and mass accuracy, MS allows the identification and quantification of thousands of proteins in complex biological samples. The combination of liquid chromatography and tandem mass spectrometry (LC-MS/MS) has become the gold standard for complex proteomic analysis, allowing efficient separation of protein mixtures prior to detection and identification, which is critical when working with clinical samples.

Significant advances in MS instrument configurations, particularly the development of modern high- and ultra-high-resolution mass spectrometers (HRAM) such as time-of-flight (TOF) MS, Fourier transform ion cyclotron resonance (FT-ICR) MS, and the Orbitrap, have taken proteomics to new heights and facilitated the transition of MS technology from analytical laboratories to clinical practice. Hybrid MS systems such as the Q-Exactive (combining quadrupole and Orbitrap) and the LTQ-Orbitrap (integrating linear ion trap, quadrupole, and Orbitrap) are widely used in both research and clinical laboratories. These instruments achieve exceptional performance through advanced modifications in various components, such as ion sources, ion transfer optics, and instrument tuning, which collectively increase sensitivity and detection speed. Optimized signal processing and electronics further enhance their capabilities, enabling both qualitative and quantitative analysis with high accuracy and resolution. These high-resolution capabilities are critical in proteomics for identifying and quantifying low-abundance proteins, detecting PTMs, and distinguishing between proteins with very similar masses. In addition, advanced ion optics and improved scan speeds enable faster data acquisition, which is essential when analyzing large numbers of samples or conducting high-throughput studies in clinical settings.
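To make "high mass accuracy" concrete, the sketch below computes a mass error in parts per million for an invented peptide ion; both the theoretical and measured m/z values are hypothetical and chosen only to show the arithmetic.

```python
# Mass accuracy is usually reported in parts per million (ppm):
# ppm = (measured - theoretical) / theoretical * 1e6.
# The m/z values below are illustrative only.

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

theoretical = 785.8421   # hypothetical doubly charged peptide m/z
measured = 785.8436      # hypothetical instrument reading
print(f"mass error: {ppm_error(measured, theoretical):.2f} ppm")
print(f"5 ppm window at this m/z: +/- {5e-6 * theoretical:.4f}")
```

Errors of a few ppm, corresponding to a few thousandths of an m/z unit here, are what allow near-isobaric peptides to be told apart on the instruments discussed above.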
Proteomic analysis primarily employs two complementary strategies. The first aims to comprehensively identify proteins in biological samples, providing a broad view of the proteome. Recent advances in MS and bioinformatics have greatly expanded the scope of this approach, enabling the identification of thousands of proteins in complex biological matrices such as plasma, urine, CSF, and tissue extracts. For example, state-of-the-art MS techniques have facilitated the identification of more than 10 000 proteins in human plasma and more than 3000 proteins in urine . A Q-Exactive mass spectrometer was used to analyze the proteomic profile of serum from patients with refractory multiple myeloma, and 632 proteins were identified, 52 of which showed significant differences among patient groups . Dasari et al. reviewed data from 16 175 samples analyzed by MS to subtype 21 amyloid proteins, demonstrating 99% specificity in amyloidosis typing and identifying rare mutations associated with hereditary forms. Chanukuppa et al. used Sequential Window Acquisition of All Theoretical Mass Spectra (SWATH-MS) and complementary proteomic techniques to identify 279 differentially expressed proteins in bone marrow interstitial fluid and 116 in serum from multiple myeloma patients, revealing potential biomarkers for disease diagnosis and pathophysiology. This approach offers advantages such as high reproducibility, comprehensive proteome coverage, and the ability to analyze complex biological fluids, overcoming the specificity limitations of targeted MS methods and enabling broader biomarker discovery compared to studies focused solely on purified plasma cells.

Brambilla et al. demonstrated the application of multidimensional protein identification technology (MudPIT) for proteomic typing of systemic amyloidoses using subcutaneous fat aspirates from 26 patient samples. By combining MS/MS with 2D chromatography, the study successfully identified amyloidogenic proteins (light chains κ (LC-κ) and λ (LC-λ), TTR, and SAA) as well as associated fibrillogenesis-related proteins, including clusterin and apolipoprotein E. The MudPIT-MS analysis compared patient samples with controls to develop a diagnostic algorithm based on normalized abundance ratios (α-values) for subtype classification. Validation against immunoelectron microscopy confirmed complete agreement in amyloid type assignment. The workflow eliminates the need for tissue fractionation, reduces noise from plasma proteins, and enables automated high-throughput analysis of complex protein samples.

Dunphy et al. used label-free LC-MS/MS and targeted metabolomics to analyze bone marrow mononuclear cells (BMNCs) and plasma from multiple myeloma (MM) patients with and without extramedullary spread (EMM). Proteomic analysis identified 225 differentially abundant proteins in BMNCs and 22 in plasma, whereas metabolomics revealed 31 altered metabolites, primarily lipids, in EMM plasma. Key biomarkers, including VCAM1, HGFA, and PEDF, were validated as plasma markers to discriminate EMM from MM, achieving an AUC of 1.0 in ROC analysis. Proteomic pathways highlighted mechanisms such as integrin-mediated signaling and Rap1 signaling that drive EMM progression. Analyses were performed using a Thermo Orbitrap Fusion Tribrid MS for BMNCs, a Q-Exactive MS for plasma, and a SCIEX QTRAP 6500plus for metabolomics. This integrative approach provides insight into the molecular basis of EMM and suggests clinically relevant biomarkers for diagnosis.

Holub et al. compared IHC with laser microdissection–liquid chromatography–tandem mass spectrometry (LMD-LC-MS/MS) for amyloid typing in 22 FFPE tissue samples from 11 patients with systemic amyloidosis.
LMD-LC-MS/MS accurately identified amyloid subtypes in all samples, outperforming IHC, which was accurate in only 36% of cases and prone to false positives, particularly for TTR and SAA. LMD-LC-MS/MS demonstrated superior sensitivity, specificity, and reproducibility across tissue types and reliably identified amyloidogenic and associated proteins. Whereas IHC remains cost-effective and widely used, LMD-LC-MS/MS provides a more accurate and comprehensive approach, particularly in cases where IHC results are inconclusive. This study highlights the potential of proteomics to improve the diagnosis of amyloidosis.

The second strategy focuses on the targeted identification and quantification of specific proteins that often serve as biomarkers of disease. This targeted approach is critical for clinical applications, as precise measurement of disease-associated proteins aids in diagnosis, prognosis, and monitoring of treatment response. In the field of amyloidosis, more than 130 potentially clinically relevant amyloidogenic proteins have been identified . However, routine clinical practice typically focuses on a select few, including SAA, TTR, immunoglobulin light chains (κ and λ), and β2-microglobulin, due to their established roles in disease pathology and management . Similarly, in multiple myeloma, the detection and quantification of monoclonal (M) proteins, including immunoglobulin heavy and light chains, are essential for diagnostic and monitoring purposes . Nevone et al. investigated N-glycosylation in κ-type immunoglobulin light chains in AL amyloidosis. The study used the Q-Exactive spectrometer to identify and analyze specific glycosylated light chain variants that tend to form amyloid fibrils. Specific N-glycosylation patterns within the FR3 region were identified that contribute to the risk of amyloidosis progression in multiple myeloma patients.

Two landmark studies have advanced the development and clinical application of EXENT-MS, a MALDI-TOF-MS-based technology for the detection and quantification of monoclonal proteins (M-proteins), highlighting its transformative potential in the noninvasive monitoring of multiple myeloma. In the first study, Kubicki et al. introduced EXENT-MS as a noninvasive alternative to bone marrow biopsy during maintenance therapy for multiple myeloma. The method proved valuable in assessing MRD and predicting progression-free survival. With a detection limit of 0.015 g/L, EXENT-MS accurately quantifies M-proteins across immunoglobulin isotypes (IgG, IgA, and IgM) and offers rapid processing and ease of implementation compared to more complex methods such as LC-MS/MS. The integration of automated sample processing ensures standardized and efficient workflows, making it ideal for routine clinical practice. However, its accuracy depends on patient-specific calibration, as the unique mass spectra of M-proteins vary according to immunoglobulin isotype and structural mutations. Calibration requires previous patient samples to establish a reference spectrum, which is critical for reliable longitudinal tracking of M-proteins. Despite this limitation, EXENT-MS represents a significant innovation in noninvasive diagnostics with the potential to significantly improve clinical care.

In the second study, Barnidge et al. compared EXENT-MS with LC-MS/MS (Triple-TOF) for the detection and quantification of M-proteins in serum samples from patients with suspected or diagnosed multiple myeloma.
In addition to M-proteins, this study analyzed PTMs such as glycosylation to assess protein heterogeneity. Sensitivity was a major focus: whereas EXENT-MS achieved a detection limit of 0.015 g/L, sufficient for most clinical applications, LC-MS/MS demonstrated significantly greater sensitivity, identifying trace protein concentrations below 0.001 g/L. This capability enabled LC-MS/MS to detect M-proteins in samples where EXENT-MS did not, illustrating the complementary strengths of the two technologies. EXENT-MS is characterized by high throughput and streamlined automation, whereas LC-MS/MS offers unparalleled sensitivity and detailed molecular characterization. This technology has also been featured in other publications , highlighting its growing role in clinical and research settings.

Multi-method LC-MS/MS approaches for the quantification of pathological proteins offer significant advantages over single-method analyses, particularly for monitoring a wide range of diseases. Recent advances by several authors have demonstrated multiplexed methods capable of analyzing tens to hundreds of proteins from complex biological matrices, thereby increasing diagnostic accuracy and throughput . Kuzyk et al. described a method to quantify 45 plasma proteins associated with cardiovascular, cancer, and inflammatory diseases. Using MRM-based LC-MS/MS with stable isotope-labeled peptides, the study achieved attomole-level LOQs for 27 proteins and CVs below 10% for 44 assays. Targeted MS techniques, such as multiple reaction monitoring (MRM) and parallel reaction monitoring (PRM), have been instrumental in the quantification of low-abundance proteins in complex biological matrices . These techniques have been successfully applied to quantify amyloidogenic proteins and M-proteins in clinical samples, significantly improving the accuracy of amyloidosis and multiple myeloma diagnosis .

Data acquisition strategies are critical in mass spectrometry-based proteomics, with data-dependent acquisition (DDA) and data-independent acquisition (DIA) representing two dominant approaches . Each offers unique strengths for protein analysis, enabling advances in diagnostics and biological research. DDA is based on selecting the most abundant precursor ions in an initial MS1 scan for subsequent fragmentation and analysis in MS2. This "top-n" selection ensures high-quality spectra for targeted precursors but is inherently stochastic. As a result, DDA can introduce missing data when precursor ion intensities vary between samples. Its main advantage is that it produces clean MS2 spectra that are well suited for identifying PTMs and performing open or targeted searches. However, its stochastic nature limits its reproducibility, especially in highly complex samples . Kelstrup et al. optimized DDA for shotgun proteomics and identified over 4000 proteins in a 3-h LC-MS/MS analysis using a quadrupole Orbitrap mass spectrometer, demonstrating the method's ability to identify proteins at high resolution. Other publications have explored DDA proteomic analysis of proteins in biological matrices .

DIA overcomes these limitations by fragmenting all precursor ions within defined m/z windows, providing comprehensive and unbiased data collection. DIA ensures consistent sampling of peptides across samples, significantly reducing missing values and improving reproducibility, especially in large cohort studies. The increased complexity of DIA spectra requires advanced computational tools and spectral libraries for data interpretation. Innovations such as pseudo-MS2 spectra generation and machine learning–enhanced analysis have mitigated these challenges, enabling reliable quantification of thousands of proteins. Other studies have used DIA to analyze clinically relevant proteins .
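A toy contrast between the two acquisition modes, using an invented MS1 peak list: DDA keeps only the top-N most intense precursors, whereas DIA fragments everything that falls inside fixed m/z windows. Window width, N, and the peaks themselves are arbitrary choices made purely for illustration.

```python
# Toy contrast between DDA and DIA precursor selection on one MS1 scan.
# DDA: pick the top-N most intense precursors (stochastic across runs when
# intensities fluctuate). DIA: fragment everything inside fixed m/z windows,
# regardless of intensity. All values are invented.

ms1_peaks = [(401.7, 1.2e6), (455.2, 8.0e5), (498.9, 2.3e6),
             (512.3, 4.1e5), (533.8, 9.9e5), (602.4, 3.0e5)]

def dda_top_n(peaks, n=3):
    return sorted(peaks, key=lambda p: p[1], reverse=True)[:n]

def dia_windows(peaks, start=400.0, stop=650.0, width=25.0):
    windows = []
    lo = start
    while lo < stop:
        hi = lo + width
        windows.append(((lo, hi), [mz for mz, _ in peaks if lo <= mz < hi]))
        lo = hi
    return windows

print("DDA selects:", [mz for mz, _ in dda_top_n(ms1_peaks)])
for (lo, hi), members in dia_windows(ms1_peaks):
    print(f"DIA window {lo:.0f}-{hi:.0f}: co-fragmented precursors {members}")
```

The low-intensity precursors dropped by the top-N step are exactly the ones that produce missing values in DDA, while the co-fragmented window contents illustrate why DIA spectra are more complex to interpret.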
Ion mobility spectrometry (IMS) has revolutionized MS workflows, particularly DDA and DIA methods. By separating ions based on their collision cross section (CCS), IMS introduces an additional dimension of separation that enhances molecular resolution and specificity. This unique capability allows differentiation of isobaric and isomeric ions, addressing critical challenges in proteomic analysis such as spectral overlap and co-elution. IMS significantly improves both protein identification and quantification by simplifying mass spectra. This simplification leads to better coverage of identified proteins, reproducibility, and sensitivity of analyses. In addition, IMS enables researchers to delve deeper into the structural intricacies of peptides and proteins, including the characterization of PTMs.

The integration of IMS into MS workflows has led to significant advances in proteomic research. For example, Jiang et al. demonstrated that IMS doubled the number of quantifiable proteins in plasma samples, identifying more than 1000 proteins compared to approximately 500 using conventional LC-MS/MS. Similarly, McMillen et al. and Aftab et al. reported improved protein detection and resolution in tryptic peptide and tissue proteomics, respectively. In the field of clinical proteomics, IMS has proven invaluable. Dunphy et al. used IMS to separate amyloid isoforms, providing insight into structural variations associated with disease progression in multiple myeloma and amyloidosis. Umapathy et al. integrated IMS with DIA for salivary proteomics, identifying novel biomarkers such as matrix metalloproteinases for oral cancer. Large-scale studies using trapped ion mobility spectrometry (TIMS) have greatly improved the detection and quantification of plasma proteins, facilitating biomarker discovery and advancing both basic and clinical research. In addition, Tomioka et al. compared digestion workflows with and without IMS and found significant improvements in the detection and quantification of host cell proteins in bottom-up proteomics.

Automated systems for proteomic analysis have significantly advanced clinical diagnostics and research by improving efficiency, sensitivity, and reproducibility. The study by Dasari et al. examines the MASS-FIX system, an advanced automated MALDI-TOF-MS method for the detection and isotyping of monoclonal proteins (M-proteins) associated with plasma cell disorders such as multiple myeloma and amyloidosis. The automation focuses on the sample preparation phase, which is performed by robotic liquid handlers. This includes immunoenrichment of immunoglobulin subclasses and light chains, followed by automated spotting onto MALDI plates. MALDI-TOF-MS analysis and data interpretation are also automated, supported by software capable of efficiently identifying abnormal mass patterns indicative of M-proteins. MASS-FIX was shown to be superior to traditional immunofixation electrophoresis (IFE), offering higher sensitivity and fewer false negatives, particularly in the detection of glycosylated light chains, which are associated with a higher risk of light chain amyloidosis (AL) and other plasma cell disorders. It also demonstrated efficiency in the detection of therapeutic monoclonal antibodies, reducing potential diagnostic interference.
With a sample recall rate of less than 1.5% and a 30% increase in throughput over IFE, MASS-FIX exemplifies the integration of automated MS into clinical workflows to improve diagnostic accuracy and laboratory efficiency. Other innovations include the autoPOTS platform developed by Liang et al. , which automates low-input proteome profiling with high precision, making it suitable for streamlined workflows in clinical and research laboratories. Similarly, SP3, presented by Müller et al. , automates sample preparation to improve reproducibility and scalability in proteomic workflows, especially for low-input samples. Messner et al. demonstrated the potential of ultra-high-throughput proteomics with automated systems for the analysis of COVID-19 patient plasma, showing the speed and scalability required for large-scale diagnostic studies. In addition, high-throughput LC-MS/MS platforms, as exemplified by Smit et al. , have been implemented to provide accurate and efficient quantitative proteomic analyses that meet the needs of modern clinical laboratories. Together, these automated systems highlight the transformative role of automation in proteomic workflows, providing scalable, robust, and clinically relevant solutions for disease diagnostics and biomarker discovery.

Database

After acquiring fragmentation spectra from MS, the first critical step is the identification of peptide sequences. This can be accomplished by two main strategies: searching protein sequence databases or de novo peptide sequencing . Although MS generates complex spectral data, it does not inherently provide direct information about protein identity or quantity . Specialized software tools and bioinformatics methods are essential to interpret these spectra and transform raw data into actionable insights suitable for further analysis . Protein databases, such as UniProt (Universal Protein Resource) , SwissProt, and NCBI, play a central role in this process. UniProt is a carefully curated database containing comprehensive protein sequence data, including isoforms, PTMs, and functional annotations. By comparing experimental MS/MS spectra with theoretical spectra generated from these databases, software tools enable accurate protein identification, which is essential for subsequent interpretation.
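To make the idea of matching experimental spectra against theoretical ones concrete, the sketch below enumerates the singly charged b- and y-ion m/z values a search engine would predict for one candidate peptide. The peptide is arbitrary, the residue-mass table is trimmed to what the example needs, and the scoring step that real engines apply afterwards is omitted.

```python
# Minimal sketch of the "theoretical spectrum" side of a database search:
# singly charged b- and y-ion m/z values for one candidate peptide.
# Residue masses are standard monoisotopic values (only the subset needed here);
# the peptide itself is arbitrary and no scoring against measured peaks is done.

RESIDUE_MASS = {  # monoisotopic, Da
    "T": 101.04768, "E": 129.04259, "S": 87.03203, "P": 97.05276,
    "I": 113.08406, "D": 115.02694, "K": 128.09496,
}
PROTON, WATER = 1.00728, 18.01056

def b_y_ions(peptide: str):
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(peptide))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(peptide))]
    return b, y

b_ions, y_ions = b_y_ions("TESTPEPTIDEK")
print("b:", [round(m, 3) for m in b_ions])
print("y:", [round(m, 3) for m in y_ions])
```

Search engines such as those listed next generate ion series like these for every candidate peptide in the database and score how well they explain the observed fragment peaks.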
A variety of software programs facilitate this analysis, each optimized for specific tasks. Tools such as Mascot use probabilistic algorithms for peptide identification and take PTMs into account, making them highly reliable. SEQUEST , one of the first MS/MS search algorithms, is still widely used, especially when integrated with Thermo Fisher Scientific systems. Open-source software such as X!Tandem offers flexibility in search parameters, making it suitable for large data sets. To increase accuracy, researchers often use multiple programs in parallel; for example, the combination of Mascot, SEQUEST, and X!Tandem has been used to compensate for individual limitations and achieve more comprehensive results . Less common tools such as Crux are valued for their speed and efficiency , whereas proprietary solutions such as Thermo Fisher Scientific's Xcalibur, SCIEX's ProteinPilot, and Waters' ProteinLynx Global Server (PLGS) are tailored to specific instrumentation . For protein quantification, whether relative or absolute, tools such as MaxQuant , which integrates the Andromeda algorithm for high-accuracy quantification using both label-free and labeled approaches, are available. Thermo Fisher Scientific's Proteome Discoverer combines tools for identification and quantification, supporting different experimental workflows and increasing the versatility of proteomic analyses .

In diseases such as amyloidosis and multiple myeloma, accurate protein identification and quantification are essential for understanding disease mechanisms and improving diagnostics. For example, immunoglobulin light chains, a hallmark of multiple myeloma, can be identified and quantified using specialized software and reference sequences in databases such as UniProt . Similarly, in amyloidosis, MS combined with specialized software and the UniProt database allows the identification of proteins that form amyloid deposits, providing insight into disease pathogenesis and potential therapeutic targets . Despite significant advances, challenges remain in integrating MS data with databases, including incomplete protein annotations and inconsistencies in spectral libraries. Future developments should focus on expanding databases with high-resolution data and fostering collaboration between computational and experimental scientists. Such efforts are critical to improving disease diagnosis and biomarker discovery. Databases and computational tools are integral to proteomic workflows, transforming MS data into practical and actionable information. Their synergy with advanced MS instrumentation accelerates biomarker discovery and improves our understanding of diseases such as amyloidosis and multiple myeloma.
Comprehensive Metabolomics in Mouse Mast Cell Model of Allergic Rhinitis for Profiling, Modulation, Semiquantitative Analysis, and Pathway Analysis

Allergic rhinitis, a prevalent allergic disorder, affects approximately 80 million Americans each year, manifesting through symptoms such as sneezing, itching, nasal congestion, and rhinorrhea. These symptoms result from inflammation in the nasal passages triggered by common allergens like pollen, hay, and dust . The condition not only alters the body's immune response, leading to dizziness, fatigue, and body pain , but also significantly impacts the quality of life. Mast cells, the key effectors in the immune system, play a pivotal role in the pathophysiology of allergic rhinitis. These cells release various mediators upon activation, contributing to the characteristic symptoms of allergic rhinitis. As primary sources of cytokines, chemokines, and lipid mediators, mast cells significantly influence immune regulation and inflammation . Deciphering the specific mediators released from mast cells is pivotal in identifying targets and pathways for effective therapeutic strategies.

Certain drugs are designed to target specific pathways within the immune response to manage allergic and inflammatory conditions. Triprolidine and zileuton are two such drugs, each with a distinct mechanism of action. Triprolidine, an antihistamine, blocks histamine H1 receptors , reducing symptoms like itching and swelling. Zileuton, a leukotriene synthesis inhibitor, targets 5-lipoxygenase to prevent leukotriene production , which is crucial in asthma and allergic reactions. In LPS-stimulated mast cells, these drugs exhibit a complementary effect: triprolidine reduces histamine release to mitigate the immediate allergic response, whereas zileuton decreases the production of pro-inflammatory leukotrienes, potentially reducing longer-term inflammation. Therefore, triprolidine and zileuton were employed as positive controls to monitor the metabolomic alterations under allergic rhinitis conditions.

Metabolomics, the study of metabolites—small-molecule substrates, intermediates, and products of metabolism in cells, tissues, or organisms—offers insights into the downstream effects of genomic and proteomic changes. As metabolites are the end products of cellular processes, metabolomics provides a snapshot of the cell's physiological state, emerging as a critical tool in biomedical and clinical research . This field encompasses both targeted and untargeted approaches. Targeted metabolomics is dedicated to quantifying specific metabolites from a predefined set, whereas untargeted metabolomics provides a broader view, analyzing a wide spectrum of metabolites without pre-existing knowledge about their identities. Consequently, untargeted metabolomics is particularly advantageous for exploring unknown biomarkers and metabolic pathways, thereby unraveling the complexities of allergic rhinitis at the molecular level . The advancements in metabolomics methodologies, notably the adoption of UHPLC-QTOF-MS/MS systems, have substantially improved metabolite identification's sensitivity, resolution, and speed. These high-resolution techniques enable accurate mass measurements and fragmentation patterns, facilitating precise metabolite identification .
Traditional methods like ELISA and Western blot, while useful in studying specific proteins associated with allergic responses, lack the capacity to provide a global view of cellular metabolism . In contrast, UHPLC-QTOF-MS/MS-based metabolomics allows for the simultaneous detection of a wide range of metabolites, offering a more comprehensive understanding of the molecular events in mast cells during allergic rhinitis. This method complements traditional protein-centric approaches, adding a crucial layer of information for a complete understanding of cellular responses in allergic conditions and enabling the investigation of pathways and semiquantitative analysis of metabolites. This study focuses on developing an allergic rhinitis model using cell metabolomics. In contrast to previous mast cell studies that primarily employed targeted metabolomics approaches, such as ELISA, SeaHorse, and other techniques , we utilize UHPLC-QTOF-MS/MS technology for comprehensive metabolite profiling, identification, and quantification. This advanced approach allows us to measure a broad spectrum of metabolites under various experimental conditions, providing insight into the metabolic pathways affected by allergy induction and therapeutic interventions. Notably, this is the first report utilizing UHPLC-QTOF-MS-based untargeted metabolomics in a mast cell model of allergic rhinitis. This model can be a valuable tool for developing and evaluating therapeutic agents for allergic rhinitis. 2.1. Chemical and Biological Reagents LC-MS-grade acetonitrile, methanol, and 99.8% pure acetic acid from Acros Organics were sourced from Thermo Fisher Scientific (Waltham, MA, USA). Deionized water was prepared using a Barnstead GenPure xCAD ultrapure water purification system from Thermo Fisher Scientific, ensuring a resistance of 18.2 MΩ. Dimethyl sulfoxide (DMSO) (BioUltra for molecular biology) with a purity exceeding 99.5%, along with ammonium acetate, triprolidine hydrochloride (Product # T6764), and zileuton (Product # PHR 2555), were procured from Sigma Aldrich (St. Louis, MO, USA). Prostaglandin E2-d4 (PGE2-d4) standard solution (500 μg/mL, in methyl acetate) with a purity above 99% was purchased from Cayman Chemical (Ann Arbor, MI, USA) and used as the internal standard (IS) master stock for this work. Murine mast cells (MC/9) were purchased from the American Type Culture Collection (ATCC) (Manassas, VA, USA). We acquired Gibco™ L-glutamine (200 mM, 100X) and Cosmic calf™ serum from Thermo Fisher Scientific, and Dulbecco’s Modified Eagle Medium (DMEM), 2-mercaptoethanol (BioReagent, purity 99%), and lipopolysaccharide (LPS) ( Escherichia coli O55:B5 ) from Sigma Aldrich. Additionally, we used D-PBS (1X) obtained from the Cleveland Clinic (Cleveland, OH, USA). The working stock solution of PGE2-d4 was prepared at a concentration of 10.0 μg/mL. This was achieved by diluting 20.0 μL of the 500 μg/mL master stock solution with 980 μL of a mixed solvent (acetonitrile/methanol/water in a 2:2:1 ratio). The master stock solutions of triprolidine (1.00 mM) and zileuton (1.00 mM) were prepared by dissolving the respective compounds accurately in 1.00 mL of sterile water and DMSO, respectively. All solutions were stored in the dark at −80 °C when not used. 2.2. Cell Culture and Studies MC/9 cells were cultured in 10.0 cm 3 tissue culture plates (VWR, Radnor, PA, USA). Each plate contained 10.0 mL of cell culture medium, formulated with 10% Cosmic calf ® serum, 2 mM L-glutamine, and 0.05 mM 2-mercaptoethanol. 
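As a numerical aside, the dilution arithmetic behind the working solutions and drug treatments can be sketched as follows; the helper functions are ours, the calculation assumes the 10.0 mL of medium per plate stated above and the treatment concentrations given in the next paragraph (40.0 μM triprolidine and 0.50 μM zileuton from the 1.00 mM stocks), and it ignores the small volume added by each spike.

```python
# Sketch of the dilution arithmetic (C1*V1 = C2*V2) for the working solutions
# and drug treatments. Helper names are ours; values follow the text.

def dilute_concentration(c_stock, v_stock, v_total):
    """Concentration after bringing v_stock of a stock at c_stock up to v_total."""
    return c_stock * v_stock / v_total

def spike_volume(c_stock, c_final, v_final):
    """Volume of stock needed to reach c_final in a final volume v_final."""
    return c_final * v_final / c_stock

# PGE2-d4 working stock: 20.0 uL of the 500 ug/mL master stock brought to 1000 uL.
pge2_d4_working = dilute_concentration(500.0, 20.0, 20.0 + 980.0)  # ug/mL
print(f"PGE2-d4 working stock: {pge2_d4_working:.1f} ug/mL")       # -> 10.0

# Drug spikes into 10.0 mL (10,000 uL) of medium from 1.00 mM (1000 uM) stocks,
# ignoring the small volume added by the spike itself.
v_triprolidine = spike_volume(1000.0, 40.0, 10_000.0)  # uL of stock for 40.0 uM final
v_zileuton = spike_volume(1000.0, 0.50, 10_000.0)      # uL of stock for 0.50 uM final
print(f"Triprolidine stock to add: {v_triprolidine:.0f} uL")  # -> 400 uL
print(f"Zileuton stock to add: {v_zileuton:.1f} uL")          # -> 5.0 uL
```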
The cultures were maintained in a humidified incubator at 37 °C with a 5% CO 2 atmosphere, and the medium was renewed every three days. Our experimental design encompassed four conditions: control, stimulated, and two treatment groups with positive controls (triprolidine and zileuton). The cells were counted using a hemocytometer and microscope with trypan blue staining solution (0.4%). Two biological replicates were used for each experimental condition. The exact number of cells (5 × 10 6 ) was used for every biological replicate for each experimental condition. The control group cells were maintained solely in the culture medium without additives for 20 h. For the stimulated group, allergic rhinitis conditions were induced using LPS at a final concentration of 1.00 μg/mL in the culture plates for 4 h. Triprolidine and zileuton, used as positive controls, were applied to the stimulated cells at the final concentrations of 40.0 μM and 0.50 μM, respectively, for 12 h, following the protocols established in previous studies . The cells from each experimental condition were collected and transferred to 15.0 mL centrifuge tubes from VWR. A centrifugation step was performed at 1000× g for 5 min to separate the cells from the medium. The cell pellets were washed with ice-cold PBS (1×, pH 7.4) and rinsed with ice-cold water to remove residual PBS. Finally, the washed cell pellets were stored at −80 °C until further analysis. 2.3. Cell Sample Preparation Each cell pellet was suspended in 1.00 mL of ice-cold deionized water in glass culture tubes. These glass culture tubes were submerged in a beaker containing ice and sonicated for 10 s (2s/cycle × 5 cycles) using a sonifier from Branson Ultrasonics (Danbury, CT, USA) to lyse the cells by disrupting the cell membrane and denature proteins. The protein concentration in the cell lysate was measured using a Pierce™ BCA protein assay kit from Thermo Fisher Scientific following the manufacturer’s instructions to normalize the cell growth factors across the samples. The final protein concentration in each cell lysate was adjusted to 100 μg/mL by adding deionized water. A protein precipitation procedure was employed for metabolite extraction. A total of 1.00 mL of each cell lysate was mixed with 2 mL of ice-cold acetonitrile and 1 mL of ice-cold methanol. The sample mixture was vigorously vortexed for 5 min, then stored at −20 °C overnight to maximize protein precipitation. The following day, the samples were centrifuged at 13,000× g for 5 min at 4 °C using a Sorvall Legend XTR centrifuge from Thermo Fisher Scientific and the supernatants were collected in fresh borosilicate glass culture tubes (16 × 100 mm) from Thermo Fisher Scientific (Pittsburgh, PA, USA). The samples were dried in an ice bath using an N-EVAP TM 111 nitrogen evaporator from Organomation (West Berlin, MA, USA). Once dried, each sample was reconstituted in 120 μL of a reconstitution solvent (acetonitrile/methanol/water in a 2:2:1 ratio), and 30.0 μL of PGE2-d4 working stock solution was added to achieve a final IS concentration of 2.00 μg/mL. The prepared samples were then transferred into HPLC vials for the subsequent untargeted metabolomic analysis. 2.4. UHPLC-QTOF-MS/MS System This study employed an Agilent 1290 Infinity II UHPLC system coupled with an Agilent 6545 QTOF mass spectrometer (Santa Clara, CA, USA). The UHPLC setup included essential components such as a solvent reservoir, degasser, binary pump, multisampler, and a column oven compartment. 
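Before turning to the instrument, the two normalization steps in the sample preparation above lend themselves to a quick numerical check; in the sketch below the BCA readings are hypothetical placeholders, while the 100 μg/mL protein target and the 30.0 μL spike of the 10.0 μg/mL PGE2-d4 working stock into 120 μL of reconstitution solvent follow the procedure just described.

```python
# Sketch of the two normalization steps in sample preparation:
# (1) diluting each lysate to a common protein concentration of 100 ug/mL, and
# (2) checking the final internal-standard concentration after reconstitution.
# The BCA readings below are hypothetical placeholders.

def water_to_add(measured_ug_per_ml, current_volume_ml, target_ug_per_ml=100.0):
    """Volume of water (mL) to add so a lysate reaches the target protein concentration."""
    if measured_ug_per_ml <= target_ug_per_ml:
        return 0.0  # already at or below target
    final_volume = measured_ug_per_ml * current_volume_ml / target_ug_per_ml
    return final_volume - current_volume_ml

# Hypothetical BCA readings (ug/mL) for four lysates of 1.00 mL each.
for name, conc in [("control", 180.0), ("LPS", 165.0),
                   ("triprolidine", 150.0), ("zileuton", 140.0)]:
    print(f"{name:>12}: add {water_to_add(conc, 1.00):.2f} mL water")

# Internal-standard check: 30.0 uL of the 10.0 ug/mL PGE2-d4 working stock
# added to 120 uL of reconstitution solvent.
is_final = 10.0 * 30.0 / (120.0 + 30.0)
print(f"Final IS concentration: {is_final:.2f} ug/mL")  # -> 2.00
```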
The mass spectrometer was equipped with a Dual Agilent Jet Stream Electrospray Ionization (Dual AJS-ESI) source. Chromatographic separation was achieved using a Waters XSelect™ HSS T3 (Milford, MA, USA) analytical column (2.1 × 150 mm, 2.5 μm) with a corresponding pre-column. The column oven temperature was set at 30 °C. A two-solvent mobile phase system was used: solvent A (5 mM ammonium acetate in 0.1% acetic acid aqueous solution) and solvent B (5 mM ammonium acetate and 0.1% acetic acid in a mix of methanol and acetonitrile, 80:20). The flow rate was maintained at 0.17 mL/min with a specific gradient elution program as follows: 0.00–1.00 min (40% B), 12.0 min (75% B), 20.0 min (85% B), 28.0–38.0 min (100% B), 40.0 min (75% B), returning to 40.0–45.0 min (40% B). Each chromatographic run included a 10 min column pre-equilibration at initial conditions (40% B). The multisampler was maintained at a temperature of 5 °C, and 5.00 μL of sample was injected for each analysis. Each biological replicate of all experimental conditions was injected twice as technical replicates. The Agilent 6545 QTOF Mass Spectrometer operated in both positive and negative electrospray ionization (ESI) modes. Data acquisition was conducted using Agilent MassHunter Data Acquisition software (Version B:10.1.48), set to Auto MS/MS acquisition mode. The Dual AJS-ESI source conditions were meticulously configured: drying gas (N 2 ) was maintained at 200 °C with a flow rate of 10.0 L/min; nebulizer gas (N 2 ) pressure was set at 35 psi; and sheath gas (N2) was maintained at a temperature of 300 °C with a flow rate of 11.0 L/min. The instrument parameters were carefully adjusted, including capillary voltage at 2500 V; nozzle voltage at 100 V; fragmentor voltage at 100 V; skimmer voltage at 60 V; octupole RF voltage at 750 V; and collision energies were set at 0, 10, 20, and 40 eV. The mass spectrometer was tuned for a scan range of 50 to 1700 m / z at a rate of 5 spectra/s for MS scans, and 3 spectra/s for MS/MS scans with medium (~4 m / z ) isolation width. The mass spectrometer was calibrated using the Agilent tuning mix solution before analysis to ensure mass accuracy throughout the data acquisition. Additionally, real-time mass correction and validation were carried out using the reference mass solution at m / z 922.0098 and m / z 1221.9906 for positive ionization mode and m / z 112.9855 and m / z 1033.9881 for negative ionization mode. 2.5. Data Processing, Statistical Analysis, and Metabolite Identification Agilent MassHunter Data Acquisition software (Version: B.10.1.48) collected data from all cell studies (control, LPS-stimulated, and positive controls with triprolidine and zileuton) in both positive and negative ionization modes. The acquired data were stored in a (.d) file format. These files were subsequently analyzed using Agilent MassHunter Qualitative Analysis software (Version: B.10.0.1). This step involved assessing chromatographic peak shapes, retention times, and background noise in each mass spectrum. Agilent MassHunter Profinder software (Version: B.10.0.2) was used for batch recursive feature extraction (i.e., molecular feature extraction and find by ion). The (.d) files obtained from the Agilent MassHunter Qualitative Analysis software were imported and processed based on their ionization mode (either positive or negative) across the four study conditions. 
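As a side note to the chromatographic conditions in Section 2.4, the gradient program can be written down as a small table of breakpoints and interpolated at any time point; this is only a convenience sketch, and the assumption of a linear ramp between breakpoints reflects typical binary-pump behavior rather than anything verified here.

```python
# Sketch of the gradient elution program as (time_min, %B) breakpoints, with a
# linear ramp assumed between points. Useful for sanity-checking %B at any
# retention time, e.g., during method transfer.

GRADIENT = [
    (0.0, 40), (1.0, 40), (12.0, 75), (20.0, 85),
    (28.0, 100), (38.0, 100), (40.0, 75), (45.0, 40),
]

def percent_b(t: float) -> float:
    """Linearly interpolate %B at time t (minutes) within the program."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

for t in (0.5, 6.0, 16.0, 30.0, 42.0):
    print(f"t = {t:5.1f} min -> {percent_b(t):5.1f} %B")
```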
The retention times across all runs were aligned using the internal standard employed for this study, and a minimum mass spectral peak height was set at 1200 counts. For molecular feature extraction, the extraction parameters included a minimum mass spectral peak height of 1500 counts, the allowed ion species of [M + H] + , [M + Na] + , and [M + NH 4 ] + for the positive ion mode, and [M − H] − for the negative ion mode, the isotope model of common organic molecules without halogens, and the limit assigned charge states to a range of 1–2; the compound filters were set by default; the compound binning and alignment parameters included a retention time tolerance of 0.10% ± 0.30 min, and a mass tolerance of 20.00 ppm + 2.00 mDa; and the post-processing filters were set at an absolute height of at least 5000 counts for mass spectral peaks, a molecular feature extraction score of at least 75, and a minimum match of molecular feature at 75% (this meant a molecular feature must be present in 3 out of 4 replicate runs in each experimental condition to be included). For the find by ion, the matching tolerance and scoring parameters included a mass score of 100, isotope abundance and spacing scores of 60 and 50, respectively, and a retention score of 0; the EIC peak integration and filtering parameters included an absolute height of at least 7000 counts for chromatographic peaks; the spectrum extraction and centroiding parameters were set by default; and the post-processing filters included an absolute height of at least 7000 counts for chromatographic peak heights and a target score of at least 75.00. Finally, the data of all experimental groups by each ionization mode obtained from the operations of “molecular feature extraction” and “find by ion” were exported as profinder archive (.pfa) files from the Agilent MassHunter Profinder software. These (.pfa) files were uploaded to Agilent Mass Profiler Professional (MPP) software (Version: B.15.1.2) for statistical analysis and compound identification. In MPP, the UHPLC-QTOF-MS data were normalized using the exogenous internal standard (PGE2-d4) in each sample to correct for signal fluctuations during instrumental analysis (since samples were previously normalized by cell count and total protein content, this was the third stage of sample normalization to ensure quantitative comparison of the data). Molecular features from each experimental condition were subjected to metabolite identification in MPP using the “ID Browser” tool, referencing the METLIN accurate-mass metabolites and lipids databases. Metabolites with identification scores below 75% were excluded to minimize false positives. For each experimental condition, datasets containing two biological replicates and two technical replicates (i.e., BR1.TR1, BR1.TR2, BR2.TR1, and BR2.TR2) were exported from MPP as .csv files to create four independent datasets by averaging: Dataset 1 = (BR1.TR1 + BR2.TR1)/2, Dataset 2 = (BR1.TR2 + BR2.TR2)/2, Dataset 3 = BR1.TR1, and Dataset 4 = BR2.TR1. These independent datasets were then re-imported into MPP, and the median values for each condition were used to assess metabolite regulation. A one-way ANOVA followed by Tukey’s HSD test was applied, selecting for p -value, log2 fold change, and FDR of 0.05, using the Benjamini–Hochberg correction. Significantly regulated metabolites ( p < 0.05, log2 fold change > 2, or fold change > 4) were identified between the following conditions: control vs. LPS-stimulated, LPS-stimulated vs. 
triprolidine-treated post-LPS, and LPS-stimulated vs. zileuton-treated post-LPS. 2.6. Principal Component Analysis and Pathway Enrichment Analysis Principal component analysis (PCA) and pathway enrichment analysis were performed on the MetaboAnalyst 6.0 (available at https://www.metaboanalyst.ca/MetaboAnalyst/ , accessed on 28 May 2024). In detail, the (.csv) files of sample replicates (i.e., two biological and two technical replicates) of each group from the same polarity (positive) were exported from Agilent MassHunter Profinder containing data like mass, retention time, and peak area. Replicate measurements’ (.csv) files were grouped together in a single folder for each condition (for, e.g., replicate files of control data into the control group folder). All the experimental condition groups, control, LPS-stimulated, and two distinct positive controls (triprolidine and zileuton), were merged into one (.zip) folder and uploaded to MetaboAnalyst. The mass tolerance of 0.025 Da and retention time tolerance of 30.0 s were set for processing the MS peak list data. The data were normalized using the IS reference feature (i.e., mass, retention time, and peak area), and data filtering was performed based on an “interquartile range” (IQR) of 5% to remove variables and increase the accuracy. The data were log-transformed (base 10) and auto-scaled. The processed data were then used to plot the 2D PCA plot. Similar steps were performed for negative polarity to plot another 2D PCA plot. The (.csv) file containing the metabolite’s name and peak area for replicates of each experimental condition was exported from MPP for pathway analysis. Individual (.csv) files were generated to identify pathways regulated between two experimental groups (such as control vs. LPS-stimulated, LPS-stimulated vs. triprolidine positive control, and LPS-stimulated vs. zileuton positive control). This individual file was uploaded as a concentration table in the pathway analysis module of MetaboAnalyst, and the “ID type” and “data format” were set as “compound name” and “samples in column”, respectively. The data were log-transformed (base 10) and auto-scaled. The output parameters for pathway analysis were set to scatter plot (for testing significant features) for pathway analysis visualization, hypergeometric test for enrichment of the pathways identified, and relative-betweenness centrality for the topological analysis, and all the compounds in the pathway library were selected for metabolite ID reference. The Mus musculus (house mouse) organism was selected to obtain pathways from the KEGG database. 2.7. Semiquantitative Analysis Relative semiquantitative analysis was performed using the (.d) files of the same polarity of the replicate samples of each experimental group, obtained from Agilent MassHunter Data Acquisition software and their corresponding (.cef) files with metabolite identities from the Agilent MPP software. The (.d) files were imported as samples, and the (.cef) file was imported as a method file for processing the semiquantitative analysis into Agilent MassHunter Quantitative analysis (Q-TOF) software (Version 10.2). The internal standard (PGE2-d4 in this case) was flagged as ISTD and then annotated as ISTD for all the metabolites with a concentration of 2.00 µg/mL in the compound setup section, and the metabolites were set as targets. The relative ISTD option was selected to carry out the relative quantitation of metabolites to the known internal standard. 
After validating the method, a semiquantitative analysis was executed based on the ratio of an individual metabolite's peak area to the IS's peak area, multiplied by the IS concentration. The results were exported to a Microsoft Excel sheet for reporting.
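A minimal sketch of the relative-ISTD calculation described in Section 2.7 is given below; the peak areas are hypothetical placeholders, and only the 2.00 μg/mL PGE2-d4 concentration is taken from the method.

```python
# Minimal sketch of the relative-ISTD calculation in Section 2.7:
# estimated concentration = (metabolite peak area / IS peak area) * IS concentration.
# Peak areas are hypothetical; PGE2-d4 is the IS at 2.00 ug/mL.

IS_CONC_UG_PER_ML = 2.00
IS_PEAK_AREA = 1.8e6          # hypothetical PGE2-d4 peak area in one sample

peak_areas = {                # hypothetical metabolite peak areas
    "histamine": 4.2e5,
    "leukotriene B4": 9.6e4,
    "L-histidine": 2.1e5,
}

for name, area in peak_areas.items():
    est_conc = area / IS_PEAK_AREA * IS_CONC_UG_PER_ML
    print(f"{name:>15}: {est_conc * 1000:.1f} ng/mL (relative to IS)")
```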
3.1. Validation of the UHPLC-QTOF-MS-Based Method
In this work, a UHPLC-QTOF-MS-based method was developed and validated for untargeted and targeted metabolomic analysis of murine mast cell (MC/9) samples under various experimental conditions, including a negative control (untreated), an LPS-stimulated sample, and two drug-treated samples (i.e., triprolidine and zileuton) post-LPS-stimulation as the positive controls. This method employed an exogenous stable heavy isotope (PGE2-d4) as the internal standard to assess the matrix effects, mass and retention alignment, relative quantitation, and cross-comparison of the data between the experimental conditions.
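The two quality metrics reported in the next paragraphs, the matrix factor and the replicate coefficient of variation (CV), can be computed as in the sketch below; all peak areas and replicate values shown are hypothetical, and only the acceptance rule (at least 70% of metabolites at CV ≤ 15%) follows the criterion cited in the text.

```python
# Sketch of the two reproducibility/QC metrics used in the validation.
# All peak areas and replicate values are hypothetical placeholders.
from statistics import mean, pstdev

def matrix_factor(is_areas_in_matrix, is_areas_in_mobile_phase):
    """Mean IS response in extracted matrix over mean IS response in neat solvent."""
    return mean(is_areas_in_matrix) / mean(is_areas_in_mobile_phase)

def cv_percent(values):
    """Coefficient of variation (%) of replicate measurements."""
    return pstdev(values) / mean(values) * 100.0

print(f"Matrix factor: {matrix_factor([1.75e6, 1.80e6], [1.95e6, 2.00e6]):.2f}")

# Acceptance rule: at least 70% of metabolites should have CV <= 15%.
replicate_areas = {
    "metabolite A": [1.00, 1.05, 0.98, 1.02],
    "metabolite B": [0.50, 0.48, 0.52, 0.51],
    "metabolite C": [2.0, 2.9, 1.6, 2.4],     # deliberately noisy example
}
cvs = {m: cv_percent(v) for m, v in replicate_areas.items()}
passing = sum(cv <= 15.0 for cv in cvs.values()) / len(cvs) * 100.0
for m, cv in cvs.items():
    print(f"{m}: CV = {cv:.1f}%")
print(f"{passing:.0f}% of metabolites at CV <= 15% "
      f"({'meets' if passing >= 70 else 'fails'} the 70% criterion)")
```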
shows the representative total ion current chromatograms (TICs) of untargeted metabolomic profiling of cell samples under various experimental conditions, whereas shows the representative extracted ion chromatograms (EICs) of targeted metabolomic analysis of some individual metabolites in cell extracts. The effects of the sample matrix are crucial when using UHPLC-QTOF-MS-based methods in metabolomics studies . Therefore, the matrix effects in both positive and negative ionization modes were determined by spiking an internal standard PGE2-d4 into various cell samples. The matrix effects expressed as matrix factors were determined by the mean peak area of PGE2-d4 at a specified concentration in an extracted cell sample from an experimental condition over the mean peak area of PGE2-d4 at the same concentration in the mobile phase. indicated the matrix factors were 0.98, 0.82, 0.92, and 0.97 for negative electrospray ionization mode and 1.12, 1.05, 1.00, and 1.00 for positive electrospray ionization mode for the untreated control, LPS-stimulated, triprolidine-treated, and zileuton-treated cell samples, respectively. The matrix effects were corrected in this study since the peak area ratios of metabolites vs. IS were used in data analysis. The reproducibility of the untargeted metabolomics profiling was assessed by multivariate analysis and visualized by the principal component analysis (PCA) score plots. As shown in , the close grouping of replicate measurements (i.e., two biological and two technical replicates) for samples of each experimental group in the PCA score plot indicated excellent precision of the UHPLC-QTOF-MS-based method. These grouping clusters represented distinct metabolomic profiles of the experimental groups. The PCA plot showed that the principal component 1 (PC1) and principal component 2 (PC2) scores were 51.2% and 32.2%, respectively, accounting for 83.4% of the total variance for data acquired in positive ionization mode, whereas for negative ionization mode, the PC1 and PC2 scores were 64.5% and 24.1%, respectively, accounting for 88.6% of the total variance. As shown in , a vector shift from the control to the LPS-stimulated group indicated a significant metabolomic alteration under allergic stimulation. In contrast, vectors for triprolidine- and zileuton-treated groups showed a metabolomic profile shift from the LPS-stimulated group back toward the control group, suggesting a reversal from the LPS-stimulated profile. Furthermore, the reproducibility of the targeted metabolomic analysis was demonstrated by the coefficients of variation (CVs) of the metabolites in global semiquantitative analysis. shows the CVs of the replicate measurements of 44 regulated metabolites in the four experimental groups. Compared to the recommended values (at least 70% metabolites at CVs ≤ 15%) , there were 88.6–97.7% of metabolites measured with CVs ≤ 15% from the four experimental groups, indicating good reproducibility of the method for targeted metabolomic analysis. 3.2. Metabolite Identification and Semiquantitative Analysis The untargeted metabolomic profiling of samples from four experimental conditions (control (untreated), LPS-stimulated, triprolidine-treated post-LPS, and zileuton-treated post-LPS) yielded 3435 molecular features. These features were subjected to metabolite identification using the “ID Browser” tool within MPP and the METLIN accurate-mass metabolite and lipid databases. 
To ensure confidence in the identification of metabolites, molecular features extracted from a sample were analyzed using both MS and MS/MS data, which were matched against the METLIN database. METLIN libraries include MS and MS/MS spectra for standards, enabling a robust comparison. Metabolites were annotated with a passing score threshold of 75.0, ensuring a high confidence level in metabolite identifications. Prior to statistical analysis, four independent datasets were generated by averaging technical replicates from each experimental condition, which included two biological replicates and two technical replicates per condition. A one-way ANOVA followed by Tukey’s HSD test (with p -value < 0.05, log2 fold change > 2, and false discovery rate (FDR) of 0.05) identified 44 significantly regulated metabolites across the following comparisons: control vs. LPS-stimulated, LPS-stimulated vs. triprolidine-treated post-LPS, and LPS-stimulated vs. zileuton-treated post-LPS . These forty-four metabolites included twenty-two amino acids and other organic acids, two peptides, six leukotrienes, six lipids and ethers, five thromboxanes, and three other metabolites: histamine, aminofructose-6-phosphate, and lipoxin C4. The significantly regulated metabolites were further subjected to semiquantitative analysis, with their concentrations determined relative to the known concentration of the IS. These concentrations, ranging from picomolar to tens of nanomolar, are summarized in . 3.3. Metabolite Regulation and Pathway Analysis visualizes the contents of 44 significantly regulated metabolites across the four experimental conditions, highlighting statistical comparisons between control vs. LPS-stimulated, LPS-stimulated vs. triprolidine-treated post-LPS, and LPS-stimulated vs. zileuton-treated post-LPS with the detailed p -values and log2 fold change values supplemented in . This figure emphasizes the metabolic alterations in mast cells under conditions of induced allergic rhinitis and demonstrates the therapeutic effects of the positive control drugs, triprolidine and zileuton. In the LPS-stimulated group, 39 of the 44 metabolites were upregulated compared to the control group. The exceptions were aminofructose 6-phosphate, l -histidine, 2-hydroxy-4-hydroxymethylbenzalpyruvate, lipoxin C4, and 3-methylbutyl 2-oxopropanoate, which were downregulated. Compared to the LPS-stimulated group, the triprolidine-treated group showed downregulation in 35 of the 44 metabolites. However, aminofructose 6-phosphate, l -aspartic acid, (S)-beta-methylindolepyruvate, dihydroxyacetone phosphate acyl ester, enol pyruvate, l -histidine, l -isoleucine, thromboxane, and l -tryptophan were upregulated. Similarly, 31 of the 44 metabolites were downregulated in the zileuton-treated group compared to the LPS-stimulated group. Two metabolites, CerP(d18:1/20:0) and glutathione, remained unchanged, while aminofructose 6-phosphate, l -aspartic acid, (S)-beta-methylindolepyruvate, dihydroxyacetone phosphate acyl ester, enol pyruvate, ethyl pyruvate, l -histidine, l -isoleucine, thromboxane, thromboxane B1, and l -tryptophan, were upregulated. Pathway analysis was performed using MetaboAnalyst with the KEGG pathway database to elucidate the impact of metabolite regulation. Of the 44 significantly regulated metabolites, 10 were not identified by MetaboAnalyst. Therefore, the pathway analysis was based on 34 metabolites (see ). 
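The selection criteria just described can be illustrated with a small sketch; the replicate intensities below are hypothetical IS-normalized values, and the Tukey HSD and Benjamini-Hochberg steps of the full workflow are omitted for brevity, so this is a simplified approximation rather than a reproduction of the MPP analysis.

```python
# Sketch of the significance filtering: one-way ANOVA across the four
# conditions plus a log2 fold-change cutoff (p < 0.05 and |log2FC| > 2,
# i.e., >4-fold). Tukey's HSD and the FDR correction used in the full
# workflow are omitted here. Replicate intensities are hypothetical.
import numpy as np
from scipy.stats import f_oneway

data = {  # metabolite -> {condition: replicate values}
    "histamine":   {"control": [1.0, 1.1, 0.9, 1.0], "LPS": [6.2, 5.8, 6.5, 6.0],
                    "triprolidine": [1.8, 2.0, 1.7, 1.9], "zileuton": [5.9, 6.1, 5.7, 6.0]},
    "L-histidine": {"control": [4.0, 4.2, 3.9, 4.1], "LPS": [3.8, 4.0, 3.9, 4.1],
                    "triprolidine": [4.1, 4.3, 4.0, 4.2], "zileuton": [4.0, 3.9, 4.1, 4.2]},
}

for metabolite, groups in data.items():
    f_stat, p_value = f_oneway(*groups.values())
    log2fc = np.log2(np.median(groups["LPS"]) / np.median(groups["control"]))
    significant = p_value < 0.05 and abs(log2fc) > 2
    print(f"{metabolite:>12}: p = {p_value:.3g}, "
          f"log2FC(LPS/control) = {log2fc:+.2f}, regulated = {significant}")
```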
highlights pathways with an impact score ≥ 0.2 and a −log10(p) ≥ 2 (equivalent to a p-value ≤ 0.01), indicating significantly altered metabolomic profiles under the experimental conditions. Specifically, six pathways—phenylalanine, tyrosine, and tryptophan biosynthesis; histidine metabolism; arachidonic acid metabolism; phenylalanine metabolism; sphingolipid metabolism; and glycine, serine, and threonine metabolism—were significantly altered in mast cells stimulated by LPS ( a). This suggests a complex and multifaceted cellular response, likely indicative of an inflammatory reaction. In contrast, treating LPS-stimulated mast cells with triprolidine ( b) significantly affected three pathways: histidine metabolism, sphingolipid metabolism, and glycine, serine, and threonine metabolism. This indicates that triprolidine not only acts as an antihistamine but also has broader effects on cellular metabolism, contributing to regulating the inflammatory response in mast cells. Similarly, treating LPS-stimulated mast cells with zileuton ( c) significantly modulated two pathways: arachidonic acid and sphingolipid metabolism. This highlights zileuton's role in not only inhibiting a specific inflammatory pathway (leukotriene synthesis) but also broadly influencing cell signaling and inflammatory responses through its impact on sphingolipid metabolism.
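To connect the reported thresholds to the underlying statistics, the hypergeometric over-representation test used by MetaboAnalyst can be sketched as follows; the pathway size and library size are hypothetical, only the 34 mapped metabolites and the reporting thresholds (impact ≥ 0.2 and −log10(p) ≥ 2, i.e., p ≤ 0.01) come from the text, and the topology (impact) score is simply assumed rather than computed.

```python
# Sketch of the over-representation (hypergeometric) test behind pathway
# enrichment, plus the reporting thresholds used above (-log10(p) >= 2,
# i.e., p <= 0.01, and impact >= 0.2). The counts are hypothetical.
import math
from scipy.stats import hypergeom

def enrichment_p(hits_in_pathway, pathway_size, n_input_metabolites, library_size):
    """P(observing at least this many pathway members among the input metabolites)."""
    return hypergeom.sf(hits_in_pathway - 1, library_size, pathway_size,
                        n_input_metabolites)

# Hypothetical example: 4 of the 34 mapped metabolites fall in a pathway of 20
# compounds, drawn from a reference compound library of ~1600 entries.
p = enrichment_p(hits_in_pathway=4, pathway_size=20,
                 n_input_metabolites=34, library_size=1600)
impact = 0.35                         # hypothetical topology (impact) score
neg_log10_p = -math.log10(p)
print(f"p = {p:.2e}, -log10(p) = {neg_log10_p:.2f}, impact = {impact}")
print("Reported as significantly altered:",
      neg_log10_p >= 2 and impact >= 0.2)
```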
UHPLC-QTOF-MS-based methods are superior for cell metabolomics studies, surpassing the capabilities of traditional targeted metabolite analysis and protein-centric methodologies.
UHPLC provides high-resolution separation of metabolites, allowing for the analysis of complex biological samples with a wide range of polarities and molecular structures. QTOF-MS offers high mass accuracy and resolution, which is essential for identifying and quantifying a vast array of metabolites, including those at low concentrations. The combination of UHPLC and QTOF-MS enables the detection of low-abundance metabolites with high sensitivity, which is critical in cell metabolomics, where metabolites can vary significantly in concentration. The high specificity of QTOF-MS ensures accurate identification of metabolites, reducing the chances of false positives. Furthermore, UHPLC-QTOF-MS-based methods can perform both untargeted (discovery) and targeted (quantitative) metabolomics, providing a comprehensive view of the metabolome. Untargeted analysis allows the discovery of novel metabolites and pathways, while targeted analysis can validate and quantify known metabolites. The advantages of UHPLC-QTOF-MS-based methods over traditional targeted metabolite analysis include broader metabolite coverage, high throughput, and efficiency. UHPLC-QTOF-MS-based methods allow for simultaneous detection and quantification of a wide range of metabolites, offering a more holistic view of the metabolome. They can process multiple metabolites rapidly with high reproducibility, making them suitable for large-scale studies and high-throughput screening. In contrast, traditional targeted metabolite analysis focuses on a predefined set of metabolites, often requiring separate analyses for different metabolite groups. This limitation reduces the scope of the study and decreases efficiency. Compared to protein-centric methodologies such as Western blot and ELISA assays, UHPLC-QTOF-MS-based methods directly measure metabolites, providing a more immediate and accurate reflection of cellular metabolism and biochemical pathways. They also offer a broader dynamic range and higher quantitative accuracy, which is critical for detecting subtle changes in metabolite levels. Western blot and ELISA measure protein levels, which are indirect indicators of metabolic states and often suffer from a limited dynamic range and issues with antibody specificity and sensitivity. Metabolomics with UHPLC-QTOF-MS provides insights into metabolic pathways and their regulation, enabling a deeper understanding of cellular processes. In contrast, protein-centric methods focus on protein abundance and modifications, which may not directly correlate with metabolic changes. The downsides of using UHPLC-QTOF-MS for metabolomic investigations include the following: (i) Cost and complexity: The instrumentation and maintenance are expensive, and the technology demands specialized expertise for operation and data analysis. (ii) Matrix effects: Complex biological samples can introduce matrix effects, potentially compromising metabolite quantification accuracy. These issues can be addressed through the use of internal standards and properly prepared samples. (iii) Data complexity: The large datasets generated require advanced bioinformatics tools for processing and interpretation, which can be time-intensive. Despite these limitations, UHPLC-QTOF-MS remains a powerful tool for studying cell metabolism and drug efficacy. It offers high sensitivity, broad metabolite coverage, and the ability to elucidate complex biochemical pathways.
While the method has challenges, its advantages make it highly valuable for advancing research in allergic rhinitis and evaluating therapeutic interventions. While multiple cell types, including eosinophils, basophils, epithelial cells, and lymphocytes, contribute to allergic rhinitis, mast cells are central to its pathophysiology. They initiate and propagate allergic inflammation by releasing key mediators such as histamine and lipid molecules, which drive the characteristic symptoms of allergic rhinitis. This makes mast cells the most suitable model for studying the disease. Although human mast cell lines like HMC-1 and LAD2 closely mimic human disease, their slow doubling time and demanding growth conditions make them less practical for initial method development and validation. Instead, mouse mast cells (MC/9) were chosen due to their faster growth, easier cultivation, and ability to produce large cell quantities. MC/9 cells are a well-established murine mast cell line widely used in immunological research. They share many features with primary mast cells and exhibit pathophysiology sufficiently representative of human mast cell behavior, making them a reliable model for studying allergic rhinitis . Mast cells, including MC/9 cells, can be activated by various pathogens and allergens through receptors such as the high-affinity IgE receptor (FcεRI), Toll-like receptors (TLRs), and others . Mast cells play critical roles in both innate and adaptive immunity. They act as first responders to pathogens (innate immunity) and modulate adaptive immune responses through interactions with other immune cells . Upon activation, mast cells degranulate, releasing a variety of pre-stored and newly synthesized inflammatory mediators, including histamine, cytokines, and leukotrienes . These inflammatory mediators are central to the pathophysiology of allergic rhinitis, contributing to symptoms such as nasal congestion, itching, sneezing, and rhinorrhea . This study investigates cell metabolomic alterations associated with allergic responses. We identify metabolic changes in response to pathogen (LPS) exposure and treatment with established anti-allergic drugs (triprolidine and zileuton). Additionally, we examine how these metabolic changes influence signal transduction pathways critical for mast cell activation and degranulation, confirming the therapeutic targets of the established drugs. LPS, a component of the outer membrane of Gram-negative bacteria, is recognized by Toll-like receptor 4 (TLR4) on the surface of MC/9 cells, facilitated by the co-receptor MD-2 and the accessory protein CD14 . The binding of LPS to TLR4 initiates a complex signaling cascade involving the recruitment of adaptor proteins MyD88 (myeloid differentiation primary response 88) and TRIF (TIR-domain-containing adapter-inducing interferon-β) . MyD88-dependent pathways predominantly lead to the activation of NF-κB (nuclear factor kappa B) and MAPK (mitogen-activated protein kinase) pathways, while TRIF-dependent pathways are associated with IRF3 activation and type I interferon production . Activation of these pathways results in the transcription and secretion of various pro-inflammatory cytokines and chemokines. MC/9 cells produce and release tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), and interleukin-1 beta (IL-1β) in response to LPS stimulation, which is crucial in mediating inflammatory responses and recruiting other immune cells . 
Although LPS is less potent in inducing degranulation than allergens that crosslink IgE receptors, it can still trigger the release of pre-stored mediators in mast cells, including histamine, proteases, and other bioactive compounds contributing to the inflammatory response . LPS activation also leads to the upregulation of co-stimulatory molecules such as CD40, CD80, and CD86 on the surface of MC/9 cells, enhancing their ability to interact with T cells and other immune cells, thereby facilitating the adaptive immune response . The overall functional state of MC/9 cells is modulated upon LPS activation, including enhanced antigen presentation capabilities and altered cytokine profiles, influencing interactions with dendritic cells, B cells, and other immune system components .

In this study, we detected and identified significantly regulated metabolites in the MyD88-dependent pathways . In the NF-κB pathway, transcription factors AP1 (activator protein 1) and SP1 (specificity protein 1) enhance the transcriptional activity of the HDC (histidine decarboxylase) gene by binding to its promoter region, leading to increased conversion of histidine into histamine, an inflammatory mediator in allergic reactions . Our experimental data support this, showing strongly upregulated histamine and downregulated L-histidine when mast cells were stimulated by LPS. In the MAPK pathway, cPLA2 (cytosolic phospholipase A2) is phosphorylated and activated , leading to the production of arachidonic acid . Arachidonic acid is a substrate for COX-2 (cyclooxygenase-2) and ALOX5 (arachidonate 5-lipoxygenase) to synthesize various inflammatory mediators. Our experimental data also show that the upregulation of arachidonic acid leads to a cascade upregulation of HPETEs (hydroperoxyeicosatetraenoic acids: 8S,15S-diHPETE, 9-HpETE), leukotrienes (A4, B3, B4, B5, D4, E4), prostaglandins (PGF2α, PGI2), and thromboxanes (thromboxane, A2, A3, B1, B2). These metabolites play significant roles in inflammatory responses and are observed in the LPS-stimulated mast cells.

When LPS-stimulated mast cells were treated with zileuton, an inhibitor of leukotriene synthesis that works by deactivating the ALOX5 enzyme, leukotrienes were downregulated, as expected, and histamine levels were almost unaffected . Leukotrienes are inflammatory mediators that contribute to allergic responses, including allergic rhinitis, causing bronchoconstriction, mucus production, and increased vascular permeability, leading to symptoms like nasal congestion and sinus pressure . Treating with zileuton, which inhibits leukotriene production, shows the role of leukotrienes in allergic rhinitis and the potential effectiveness of leukotriene inhibitors as therapeutic agents. When LPS-stimulated mast cells were treated with triprolidine, an antihistamine that blocks the H1 receptor, histamine levels were notably downregulated, and L-histidine, the histamine precursor, exhibited an inverse trend to histamine, suggesting shifts in the histamine synthesis pathway. This study utilized two well-known pharmacological agents, triprolidine and zileuton, as positive controls, allowing for a comprehensive investigation of the cellular and molecular mechanisms underlying allergic rhinitis. It also provided a means to evaluate the therapeutic effectiveness of targeting different mediators (histamine and leukotrienes) in modulating allergic responses.
Pathway analysis revealed significant alterations in six metabolic pathways in mast cells stimulated by LPS ( a), suggesting a coordinated metabolic reprogramming in response to LPS stimulation. This reprogramming likely supports the production of inflammatory mediators, enhances inflammation-related signaling processes, and adjusts cellular metabolism to meet the energetic and biosynthetic demands of the immune response. Here is an overview of the potential changes in each pathway: (1) Phenylalanine, tyrosine, and tryptophan biosynthesis: These amino acids are precursors to neurotransmitters such as dopamine, norepinephrine, and serotonin. Alterations in this pathway may indicate increased synthesis of these compounds, which can modulate immune responses and inflammation. (2) Histidine metabolism: Histidine is a precursor to histamine, a well-known mediator of allergic responses and inflammation. Changes in this pathway suggest increased histamine production, contributing to the inflammatory response. (3) Arachidonic acid metabolism: Arachidonic acid is crucial for the production of eicosanoids (prostaglandins, thromboxanes, and leukotrienes), which are potent mediators of inflammation. Alterations in this pathway indicate enhanced production of these inflammatory mediators. (4) Phenylalanine metabolism: This pathway is connected to the production of tyrosine and downstream neurotransmitters and bioactive molecules. Changes here further support the involvement of bioactive amines in the immune response. (5) Sphingolipid metabolism: Sphingolipids are essential components of cell membranes and play a role in signaling processes, including stress responses and inflammation. Alterations in this pathway suggest changes in cell membrane dynamics and signaling that are important for inflammatory responses. (6) Glycine, serine, and threonine metabolism: These amino acids are involved in various metabolic processes, including synthesizing proteins and nucleotides. Changes in this pathway reflect shifts in energy metabolism and biosynthetic activities in response to inflammatory stimuli. Treating LPS-stimulated mast cells with triprolidine ( b) modulates the immune response by affecting histamine signaling. This modulation leads to significant changes in histidine metabolism, reducing histamine levels or activity. These primary effects cascade into alterations in sphingolipid metabolism, impacting membrane dynamics and cell signaling, and glycine, serine, and threonine metabolism, affecting cell energy balance and biosynthesis. These changes suggest a shift towards a less inflammatory and more regulated metabolic state in response to triprolidine. Here is an analysis of the observed alterations: (1) Histidine metabolism: Triprolidine, an antihistamine, inhibits histamine receptor activity. This inhibition might trigger feedback mechanisms that alter histidine metabolism, potentially reducing the conversion of histidine to histamine or affecting overall histamine levels in the cells. The reduced histamine activity could impact various downstream processes, including inflammatory responses and cell signaling, leading to a rebalancing of metabolic flux through the histidine pathway. (2) Sphingolipid metabolism: Triprolidine’s effect on histamine receptors, which are G-protein-coupled receptors, might indirectly influence sphingolipid metabolism. Sphingolipids, such as Cer(d18:1/16:0) and Cer(d18:1/14:0), are vital for membrane structure and signaling. 
Changes in this pathway might reflect alterations in cell membrane dynamics and signal transduction, possibly due to reduced histamine-mediated signaling. Sphingolipids are also involved in the cellular stress response. By modulating histamine activity, triprolidine could impact sphingolipid-mediated stress response pathways, affecting cell survival and inflammatory processes. (3) Glycine, serine, and threonine metabolism: These amino acids play roles in numerous biosynthetic and metabolic processes. Triprolidine’s effect on histamine signaling might lead to changes in the cellular demand for these amino acids, altering their metabolism. Alterations in this pathway could indicate changes in energy metabolism and the synthesis of nucleotides and other molecules necessary for cell growth and repair. This could be a compensatory mechanism in response to the altered inflammatory environment due to triprolidine treatment. Therefore, triprolidine not only acts as an antihistamine but also has broader effects on cellular metabolism, contributing to regulating the inflammatory response in mast cells. Zileuton treatment of LPS-stimulated mast cells ( c) significantly modulates arachidonic acid and sphingolipid metabolisms, indicating its specific effects on the inflammatory response and cellular signaling. Here is a detailed analysis of the observed effects: (1) Arachidonic acid metabolism: LPS stimulation likely upregulates arachidonic acid metabolism, leading to increased production of eicosanoids, such as leukotrienes, prostaglandins, and thromboxanes , which are potent inflammatory mediators that amplify the immune response. Zileuton, an ALOX5 inhibitor, specifically inhibits the synthesis of leukotrienes from arachidonic acid. By blocking this pathway, zileuton reduces leukotriene production, thereby decreasing inflammation and mitigating the inflammatory response induced by LPS. This explains the significant modulation of arachidonic acid metabolism observed in our data. (2) Sphingolipid metabolism: LPS can alter sphingolipid metabolism, affecting cell membrane integrity, signaling, and the production of sphingolipid-derived mediators involved in inflammation and cell survival . The modulation of sphingolipid metabolism by zileuton suggests secondary effects beyond leukotriene inhibition. Changes in this pathway could indicate alterations in cell signaling pathways and membrane dynamics in immune responses. Zileuton may stabilize cell membranes and modulate inflammatory signaling by influencing sphingolipid metabolism, contributing to a broader anti-inflammatory effect. These findings suggest that zileuton not only targets leukotriene production but also has broader implications on cellular metabolism and signaling, enhancing its anti-inflammatory properties. Cell metabolomics analysis is crucial for elucidating the intricate modulation of biological pathways under various physiological and pathological conditions. This is notably exemplified in allergic rhinitis, where significant modulations in histidine metabolism and arachidonic acid metabolism pathways are observed. By employing targeted drug interventions with triprolidine and zileuton, modulation of these specific pathways has been documented, offering invaluable insights into these therapeutic agents’ molecular mechanisms of action. 
This study underscores the pivotal role of metabolite regulation in unraveling the complexities of disease pathogenesis and therapeutic interventions, thereby advancing our understanding of the molecular mechanisms underlying physiological responses and treatment outcomes. This study offers several notable strengths. It employs advanced UHPLC-QTOF-MS technology, enabling highly efficient separation and accurate mass detection, which enhances metabolite identification and semiquantitation. A semiquantitative approach was chosen over absolute quantitation due to the lack of internal standards for all metabolites and variability in response factors across compounds. To address this, a stable isotope-labeled internal standard was used to normalize peak areas and calculate relative quantitation. While this method does not provide absolute quantitation, it allows for reliable comparison of metabolite levels across experimental conditions, aligning well with the study’s objectives. The study successfully identifies significantly regulated metabolites and elucidates key metabolic pathways, providing valuable insights into the underlying metabolic dynamics. Furthermore, it establishes a testing model for potential therapeutic agents, offering insights into their efficacy and molecular mechanisms of action. However, this study has certain limitations. The single time-point data acquisition strategy captures only momentary snapshots of the dynamic metabolomic landscape, potentially missing critical changes during allergic rhinitis progression or the effects of positive controls. Additionally, using a reverse-phase LC system, while effective for analyzing non-polar to moderately polar metabolites, limits the coverage of the entire metabolome. Despite this, the study successfully identified crucial intermediates in key pathways (e.g., MyD88-dependent, NF-κB, and MAPK pathways) highly relevant to inflammatory and allergic responses. Furthermore, the study focuses solely on metabolite profiling and does not assess protein abundance or activity, which are pivotal in regulating metabolic processes. Integrating proteomic analyses could provide a more comprehensive understanding of the interplay between metabolic and proteomic pathways. We acknowledge the limitation of using only two biological replicates in this study. This decision was guided by the exploratory nature of the investigation, which aimed to establish an initial metabolomic profile of allergic rhinitis and identify potential metabolic pathways of interest. While increasing replicates would enhance statistical power, we reasoned that using a cell line with a consistent genetic background would minimize biological variability. Additionally, the resource-intensive nature of UHPLC-QTOF-MS-based metabolomics, including costs for consumables, instrument time, and data analysis, necessitated a focused experimental design. To mitigate variability from sample handling and preparation, a rigorous three-step normalization procedure was implemented, including (i) cell counting to ensure consistent cell numbers across replicates, (ii) cell protein content normalization to account for differences in cell growth, and (iii) internal standard calibration to correct instrumental variations during metabolite analysis. Multivariate statistical analyses such as principal component analysis (PCA) and coefficient of variation (CV) for concentration measurements supported the method’s reproducibility and reliability. 
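For readers who wish to see what these quality checks look like in practice, a minimal R sketch is given below; the replicate matrix, metabolite names and values are hypothetical and merely stand in for the normalized metabolite table described above.

# Minimal sketch with hypothetical values (not the study data): per-metabolite
# coefficient of variation (CV) within each condition and a PCA overview,
# the two reproducibility checks mentioned above.
m <- rbind(ctrl_1 = c(histamine = 1.00, ltb4 = 0.50, pgf2a = 0.80),
           ctrl_2 = c(histamine = 1.05, ltb4 = 0.48, pgf2a = 0.83),
           lps_1  = c(histamine = 4.10, ltb4 = 1.90, pgf2a = 1.60),
           lps_2  = c(histamine = 4.30, ltb4 = 2.05, pgf2a = 1.55))
cv <- function(x) 100 * sd(x) / mean(x)            # CV in percent
apply(m[c("ctrl_1", "ctrl_2"), ], 2, cv)           # CV per metabolite, control replicates
apply(m[c("lps_1", "lps_2"), ], 2, cv)             # CV per metabolite, LPS replicates
pca <- prcomp(m, center = TRUE, scale. = TRUE)     # replicates should cluster by condition
summary(pca)$importance["Proportion of Variance", ]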
These analyses demonstrated excellent consistency across replicates. Furthermore, the observed metabolic changes align with known biochemical pathways in allergic rhinitis, reinforcing the biological relevance of the findings. Accurate metabolite identification posed a challenge due to sample complexity and platform limitations. Although high-resolution mass spectrometry achieved accurate mass measurements (<2 ppm in MS/MS mode) and utilized spectral libraries for annotation, the absence of analytical standards remains a constraint. Stringent criteria, including isotopic pattern matching and MS/MS fragmentation matching (when available), were applied to enhance reliability. Despite these efforts, some metabolites may remain ambiguously identified or unannotated. The study did not include drug-treated controls (e.g., cells treated with triprolidine or zileuton without LPS stimulation). While such controls would provide insights into the drugs’ effects on baseline metabolism, the primary objective was to investigate metabolic changes in LPS-induced inflammation and the drugs’ modulatory effects in this context. While this study provides significant insights into metabolite regulation and pathway modulation, a more holistic approach that includes continuous monitoring and proteomic analysis could further enhance our understanding of the complex biological processes involved.

This study provides significant insights into the metabolomic landscape of mast cells under conditions of allergic rhinitis and the impact of therapeutic interventions. Utilizing UHPLC-QTOF-MS-based untargeted and targeted metabolomics, we identified 44 significantly regulated metabolites, including histamine, leukotrienes, prostaglandins, thromboxanes, and ceramides. Pathway analysis revealed significant modulations in arachidonic acid metabolism, histidine metabolism, and sphingolipid metabolism, which are critical in the inflammatory response associated with allergic rhinitis. Our findings demonstrated that LPS-induced stimulation of mast cells results in significant metabolic changes indicative of an inflammatory state. Treatment with triprolidine and zileuton modulated these metabolic pathways, effectively reversing the metabolic shifts induced by LPS. Triprolidine primarily affected histidine and sphingolipid metabolism, whereas zileuton specifically targeted arachidonic acid and sphingolipid metabolism. Integrating advanced metabolomics techniques in this study provided comprehensive insights into the complex biochemical processes underpinning allergic rhinitis. This research not only enhances our understanding of mast cell metabolism in allergic responses but also highlights the potential of metabolomics in evaluating the efficacy of therapeutic agents. Future studies incorporating continuous monitoring and proteomic analysis could further unravel the dynamic interplay between metabolites and proteins in allergic inflammation.
Gene metabolite relationships revealed metabolic adaptations of rice salt tolerance

Abiotic stress describes the negative impact of non-living environmental factors on plants, leading to a variety of responses that can alter biological processes like gene expression and cellular metabolism, as well as affect growth and development. This type of stress encompasses issues such as extreme temperatures, drought, flooding, salinity, metal toxicity, and nutrient deficiencies, each prompting distinct reactions. Key environmental challenges, particularly extreme temperatures, drought, and saline soils, significantly restrict plant survival and their distribution in natural ecosystems , . Soil salinization adversely affects both the yield and quality of crops. Salt stress poses a significant threat to plant growth, leading to decreased leaf expansion, stoma closure, a reduced photosynthetic rate, and a loss of biomass , . Salinization currently affects more than 800 million hectares of land on Earth, and projections suggest that by 2050, approximately half of cultivated land could be impacted by salinity , . Plants have developed adaptive mechanisms to counter salt stress by making changes at the morphological, physiological, biochemical, and molecular levels. They also adjust metabolite and gene expression to combat stress and minimize damage , . Rice ( Oryza sativa L.), a glycophyte, is highly sensitive to salt stress . The extent of salt tolerance varies among genotypes and stages of development. While rice is particularly vulnerable to salt stress during the seedling stage, it shows moderate tolerance during the tillering stage , . Certain genotypes have been recognized for their salt tolerance . In various studies, the genotype IR28 has been utilized as salt-sensitive rice for molecular investigations of salinity tolerance , . Additionally, there are reports indicating the genetic analysis and high salinity tolerance of the genotype CSR28 – .

Reactive oxygen species (ROS) are a group of highly reactive molecules that contain oxygen, such as superoxide, hydrogen peroxide, and hydroxyl radicals. They are produced as natural byproducts of cellular metabolism and play important roles in cellular signaling and defense against pathogens. However, under severe abiotic stress conditions, an excess of ROS is produced, causing damage to various cellular components, such as DNA, proteins, carbohydrates, lipids, and enzymes, ultimately triggering programmed cell death , . To prevent injuries, plants regulate ROS production effectively by employing a range of enzymatic and nonenzymatic antioxidants. Enzymatic antioxidants belonging to the plant defense system include peroxidase (POD), superoxide dismutase (SOD), glutathione reductase (GR), catalase (CAT), dehydroascorbate reductase (DHAR), ascorbate peroxidase (APX), and monodehydroascorbate reductase (MDHAR), while nonenzymatic antioxidants include ascorbate (AsA), flavonoids, carotenoids, stilbenes, tocopherols, and various vitamins , . Omics technologies, such as metabolomics, enable a system-wide analysis of metabolic processes, for example in response to salinity stress . Metabolite profiling is conducted by instruments such as gas chromatography–mass spectrometry (GC‒MS) and permits the study of plant responses to environmental stresses at the molecular level.
The comprehensive quantitative and qualitative measurements of the cellular metabolites acquired from stress-treated tissues provide a broad view of plant physiological and molecular reactions to stresses. Furthermore, metabolites are considered the final product of gene expression and are closely related to phenotype, which doubles the value of their study – . Recent reports have indicated the role of metabolites such as amino acids, sugar alcohols, and organic acids in osmotic adjustment as osmolytes, ionic homeostasis, photosynthesis and leaf senescence in salt-treated rice – . Analyzing the metabolome and transcriptome together can provide precise insights into how genes and metabolites interact, allowing for a systematic exploration of metabolic pathway synthesis and regulation. This approach helps overcome the limitations of individual omics studies, providing a more detailed explanation of the expression patterns and involvement of key genes in metabolic adaptations . Wang et al. reported several genes in OsDRAP1 -mediated salt tolerance in rice by Pearson correlation analysis of transcript and metabolite levels. These genes were involved in key biosynthetic pathways of amino acids (proline, valine), organic acids (glyceric acid, phosphoenolpyruvic acid and ascorbic acid) and carbohydrate metabolism. In the present study, we used an association approach of metabolomics and gene expression data to elucidate metabolic adaptations of rice salt tolerance using various genotypes/organs/timepoints.

Phenotypic evaluation confirmed the contrasting salinity tolerances of IR28 and CSR28

Phenotypic evaluation of IR28 and CSR28 rice seedlings 1 week after exposure to high salinity stress confirmed differences in their salinity tolerance (Fig. ). The differences between the genotypes were more prominent in the shoots than in the roots. The difference in shoot length among the genotypes increased from 3.7% under control conditions to 47.1% under salinity stress. Furthermore, the difference in shoot dry weight increased from 1% in the control treatment to 57.9% in the salinity treatment. Compared with CSR28, IR28 exhibited greater reductions in both shoot length and dry weight under salinity stress. The leaf RWC of salt-stressed sensitive IR28 plants decreased significantly (23.4%) compared to that of the salt-tolerant CSR28 plants, while no significant difference was detected between the genotypes under control conditions. Brown and tubular leaves appeared in most IR28 seedlings after 1 week of salt stress, while CSR28 seedlings displayed more green leaves. The CSR28 genotype had a significantly lower mean salinity score than the IR28 genotype, which indicated that CSR28 was more salinity tolerant.

Effects of salt stress on H2O2 and MDA contents and antioxidant enzyme activity

Changes in the levels of H2O2 and MDA provide insights into the capacity to combat ROS and lipid peroxidation under stress. Both H2O2 (Fig. a) and MDA (Fig. b) levels increased in response to salinity stress in the organs of both genotypes. However, the increases were notably more pronounced under long-term stress in the sensitive genotype. The levels of H2O2 and MDA in the roots of IR28 at the 54-h timepoint increased compared to CSR28 by 206.6% and 164.7%, respectively, while the increases in the shoots were 216.6% and 166.6%, respectively. The results of the antioxidant enzyme activity revealed that the levels of both CAT and SOD increased in all conditions in response to salinity.
Although there was no significant difference between the two genotypes in either organ at the 6-h timepoint, CAT and SOD enzyme levels in the roots of CSR28 at the 54-h timepoint were elevated compared to IR28 by 226.6% and 162.1%, respectively, while those increases in the shoots were 214.6% and 180.1%, respectively (Fig. c,d).

The metabolic responses of salt-stressed rice seedlings revealed the specific functions of metabolites in salinity tolerance

GC‒MS analysis revealed 37 primary metabolites, including 18 amino acids (AAs), 5 sugars and sugar alcohols and 14 organic acids (OAs), in the roots and shoots of the CSR28 and IR28 genotypes at the 6-h and 54-h timepoints (the mean and standard deviation values of relative metabolite levels are shown in Supplementary Table ). ANOVA revealed significant differences between 35 metabolites in roots and shoots, with an average phenotypic variation of 35.6% based on the salinity/control ratio (Supplementary Table ). Of the 37 identified metabolites, 26 presented a significant difference between the two timepoints of 6 h and 54 h, explaining 8.2% of the phenotypic variation. The two genotypes displayed significant differences in 28 metabolites, which explained 9.7% of the phenotypic variation. Furthermore, the interaction of genotype × timepoint × organ was significant for 25 metabolites, which explained 4% of the phenotypic variation. The relative changes of the metabolites are shown as the ratio of salinity/control in all conditions (Table ). In general, 89.3% of the metabolite changes were significant in response to salinity, of which 56.5% and 32.8% represented increased and decreased accumulation, respectively. Lactate had the smallest response to salinity under the different conditions.

Amino acids (AAs)

Among the 18 identified AAs, 94.4% exhibited significant changes in response to salinity stress, of which 83.3% and 11.1% exhibited increased and decreased accumulation, respectively. The greatest increases were observed for the metabolites in the salt-stressed shoots of CSR28 at the 54-h timepoint. These metabolites included isoleucine (42.8-fold), leucine (31.06-fold) and proline (36.05-fold). In the roots at the 6-h timepoint, only three AAs (α-alanine, GABA and methionine) were increased in CSR28 compared to IR28, while 14 AAs were remarkably increased in CSR28 compared to IR28 at the 54-h timepoint. Furthermore, the accumulation of six and 12 AAs was greater in the shoots of CSR28 than in those of IR28 at the 6-h and 54-h timepoints, respectively (Table ).

Sugars and sugar alcohols

Out of the 5 sugars and sugar alcohols identified through GC‒MS analysis, 90% exhibited significant changes in response to salinity, with 55% and 35% increased and decreased accumulation, respectively. In the roots of CSR28, raffinose (45.2-fold) and fructose (-11.1-fold) showed the largest increase and the largest decrease, respectively, at the 54-h timepoint. After 6 h of salinity treatment, the glucose and raffinose contents in the roots of IR28 were greater, and the fructose content was lower than those in the roots of CSR28. However, at the 54-h timepoint, the raffinose and myoinositol contents in CSR28 were significantly greater, and the glucose and glycerol contents were lower than those in IR28. In the shoots, the values in IR28 were greater than those in CSR28 at both timepoints (Table ).
Organic acids (OAs)

Among the 14 OAs identified via metabolite profiling, 84.2% of the changes were significant in response to salinity under all conditions, including 22.4% and 61.8% increased and decreased accumulation, respectively. Furthermore, the maximum (8.4-fold) and minimum (-4.9-fold) changes were due to citrate in the roots of IR28 and quinate in the shoots of CSR28 at the 54-h timepoint, respectively. After 6 h of exposure to salinity, the concentrations of six OAs in CSR28 roots were greater than those in IR28 roots, while only the hydroxyglutarate concentration in CSR28 roots was greater than that in IR28 roots. Between CSR28 and IR28, nine OAs were differentially accumulated after 54 h of salinity treatment. After 6 h of salt exposure, the contents of six OAs in the shoots of IR28 were greater than those in the shoots of CSR28, while only the fumarate content in the shoots of CSR28 was greater than that in the shoots of IR28. Under long-term stress, a greater reduction in OAs was observed in CSR28 than in IR28 (Table ). Aspartate among AAs, myo-inositol among sugars and sugar alcohols and citrate, glycerate, isocitrate and shikimate among OAs showed organ-specific accumulation and increased only in roots in response to salinity stress. Among the OAs, only α-ketoglutarate and pyruvate were specifically accumulated between the genotypes in the salt-stressed shoots, decreasing in CSR28 and increasing in IR28.

Hierarchical cluster analysis (HCA) grouped the metabolites and samples

A heatmap was constructed to obtain an overview of metabolite profiling under different conditions. HCA grouped the metabolite data into two major clusters, roots and shoots, and each cluster into two distinct control and salinity stress subclusters (Fig. ). Furthermore, each subcluster was classified with respect to the timepoints of 6 h and 54 h, and each subcluster included both the tolerant and sensitive genotypes. Maximum similarity of the timepoints of 6 h and 54 h was observed under the control condition in both organs, and this similarity was greater in the shoots than in the roots, while the difference between the metabolites at both timepoints increased significantly under salinity stress. The genotypes in both organs exhibited a maximum correlation under the control condition, and the difference in their metabolome increased under salinity stress. In the roots, the difference between the two genotypes at 54 h was greater than that at 6 h, while the difference in the shoots was greater at 6 h than at 54 h. In general, the correlations between the samples were as follows: genotype > timepoint > treatment > organ.

Correlations between metabolites

To further explain the relationships between metabolite contents in response to salinity stress, the correlations between amino acids, and between organic acids and carbohydrates were analyzed (Fig. ). The results indicated that there was a significant positive correlation between the content of most amino acids, except for the correlations of glycine with putrescine ( r = − 0.91, P value = 0.001), aspartate ( r = − 0.85, P value = 0.02), asparagine ( r = − 0.82, P value = 0.02), and β-alanine ( r = − 0.81, P value = 0.04), and the correlations of putrescine with proline ( r = − 0.85, P value = 0.03) and threonine ( r = − 0.81, P value = 0.001), which were negatively correlated (Fig. a). In the correlation analysis between organic acids and carbohydrates, diverse patterns of both positive and negative correlations were observed.
For example, except for glycerol, the other carbohydrates were positively correlated with each other. Furthermore, glycerol followed a pattern similar to that of organic acids such as lactate and pyruvate (Fig. b).

Expression of genes involved in the metabolism of metabolites and antioxidant enzyme activity under salinity stress

Analysis of genes related to the accumulation of metabolites and antioxidant enzymes is highly important for understanding the synthesis of these compounds in response to salinity stress. Therefore, we focused on the key genes associated with the metabolites and antioxidant enzymes identified in this research (Fig. ). The expression of key genes involved in proline biosynthesis demonstrated that salinity stress led to the up-regulation of the genes OsP5CS2 , OsP5CR , and OsP5CS1 in most of the experimental samples. OsP5CS2 showed a significant increase in expression under all conditions except at the 6-h timepoint in the roots, where its expression was down-regulated in IR28 but did not change in CSR28. OsP5CR and OsP5CS1 showed elevated expression at both timepoints in the roots of CSR28, while a notable increase in the expression of OsP5CS1 occurred in all conditions in the shoots. The results of the expression of three genes involved in raffinose biosynthesis showed that OsRS2 had a significant increase in expression in response to salinity under all conditions, while OsNIN7 and OsEno5 were up-regulated in response to salinity in the roots, especially at the 54-h timepoint. Our findings also indicated an increase in the expression of the OsIMP-2 and OsMIOX genes involved in myoinositol biosynthesis in the roots. Among the four genes involved in glycolate metabolism, the expression of the OsGLO1 , OsGLO6 and OsPLGG1 genes significantly increased in response to salinity in the roots of CSR28 at the 54-h timepoint. Finally, key genes involved in the synthesis of antioxidant enzymes were studied, and the results showed that except for OsCatB , which encodes CAT and is specifically expressed in the shoots, the other genes were up-regulated in the roots. Remarkably, OsSOD-Fe and OsNCA1a exhibited a significant increase in their expression in response to salinity only in the roots of the tolerant genotype CSR28 at the 54-h timepoint.

Linear regression analysis reveals the relationships of metabolites and antioxidant enzymes with their relevant genes in response to salinity stress

Linear regression analysis was used to identify significant relationships between the contents of metabolites and antioxidant enzymes and their encoding genes. The results indicated that the proline and myoinositol contents were positively correlated with the expression of OsP5CS2 (R2 = 0.81, P value = 0.03) and OsIMP (R2 = 0.82, P value = 0.02), respectively. Among the three genes related to CAT synthesis, only OsNCA1a (R2 = 0.84, P value = 0.01) was significantly correlated with the enzyme content, while OsSOD-Fe (R2 = 0.88, P value = 0.001) was positively related to the SOD content (Fig. ).
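To illustrate the form of the gene–metabolite regressions reported above (a sketch with hypothetical numbers, not the study's data or code), the relationship between a metabolite level and the relative expression of an associated gene can be tested in R as follows.

# Minimal sketch with hypothetical values: linear regression of a metabolite
# level on the relative expression of an associated gene, reporting R-squared
# and the p-value of the slope, as in the gene-metabolite analysis above.
expr_gene  <- c(1.0, 1.8, 2.6, 3.1, 4.0, 4.7)   # hypothetical relative expression
metabolite <- c(0.9, 1.5, 2.8, 3.0, 4.2, 4.9)   # hypothetical relative metabolite level
fit <- lm(metabolite ~ expr_gene)
summary(fit)$r.squared                              # R2 of the fit
summary(fit)$coefficients["expr_gene", "Pr(>|t|)"]  # p-value of the slope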
The present study assessed the responses of the roots and shoots of rice seedlings of two contrasting genotypes to high salinity. After 1 week of high salinity treatment, the length, biomass and dry weight of the IR28 shoots were lower than those of the CSR28 shoots (Fig. ). This is explained by the osmotic phase of salinity stress and consequently ionic toxicity, which accelerates the aging of older leaves and their necrosis due to salt accumulation . It seems that IR28 experienced both the osmotic and ionic toxicity phases earlier and more severely. The CSR28 genotype exhibited greater growth vigor than that of the IR28 genotype under salinity stress, which indicated greater salinity tolerance. Faster growth can transfer Na+ ions to the shoots more slowly , . Furthermore, the rapid growth and development of cells prevent the accumulation of high salt concentrations , . The RWC, which is used to describe the water status of plant cells, was significantly greater in salt-stressed CSR28 than in IR28. Numerous studies have reported that the RWC of tolerant genotypes is greater than that of sensitive ones , . An increased ability of plants to maintain water potential allows them to sustain photosynthetic activity, increase water use efficiency (WUE), and enhance their osmotic adjustment ability , .

Plants exposed to salt stress undergo diverse physiological alterations . ROS such as H2O2 and O2− are extremely reactive molecules that can accumulate at elevated levels during environmental stresses such as salt, drought, and cold, causing oxidative damage to plant cells . MDA is produced through lipid peroxidation and serves as a marker for oxidative damage in plant cell membranes induced by stress . The H2O2 and MDA contents were greater in the roots and shoots of the sensitive genotype than in those of the tolerant plants in response to long-term salinity stress (Fig. a,b), which is an indication of greater oxidative stress damage in IR28. ROS-scavenging enzymes and antioxidants such as CAT and SOD play important roles in reducing oxidative stress , . In the present study, the results revealed that the tolerant genotype had greater CAT and SOD contents than those of the sensitive genotype in response to salinity stress, particularly under long-term exposure (Fig. c and d), suggesting that these enzymes play vital roles in ROS scavenging and alleviating stress. Furthermore, our findings revealed the expression of the key encoding genes of the antioxidant enzymes (Fig. ). Remarkably, linear regression analysis revealed that OsNCA1a and OsSOD-Fe had significant positive relationships with the contents of the CAT and SOD enzymes, respectively (Fig. ).

The GC-MS analysis revealed increased accumulation of AAs in both salt-stressed organs of the two genotypes (83.3%) in response to salinity. AAs act as osmolytes that maintain cellular turgor and protect molecules against damage caused by oxidative stresses through osmotic adjustment . In the present study, the accumulation of AAs increased in both organs after 54 h of salinity treatment, indicating that long-term salinity stress results in increased Na+ accumulation and heightens the osmoprotective role of AAs.
The difference in AA accumulation between the two genotypes increased in both organs under long-term salinity stress. More AAs were detected in the roots of CSR28 than in those of IR28 at the 54-h timepoint (Table ), suggesting the specific role of the metabolic pathways of roots in promoting salinity tolerance.

Proline, as one of the key primary metabolites, possesses antioxidant activity and protects macromolecules against ROS, along with acting as an osmolyte in osmotic adjustment , . Proline accumulation is directly related to abiotic stress tolerance . Here, proline levels increased in response to salinity in both organs and genotypes and at both timepoints (Table ). The tolerant genotype CSR28 possessed greater potential for coping with osmotic challenges, as proline accumulation was greater in its shoots than in those of IR28. On the other hand, a significant increase in GABA was observed in response to salinity stress in the roots of the tolerant genotype CSR28 at the 6-h timepoint. GABA, a non-protein amino acid, quickly builds up in plants under stress conditions , helping to alleviate plant stress by regulating osmotic balance . In general, the results of the present study were in agreement with previous findings on the role of AAs in inducing the salinity tolerance of rice at the seedling stage , .

Numerous studies have shown that genes related to proline biosynthesis are up-regulated under salt stress , . This study showed that the genes OsP5CS2 , OsP5CR , and OsP5CS1 were up-regulated in response to salinity stress under most of the experimental conditions (Fig. ). However, a significant gene-metabolite relationship was observed between the expression of the OsP5CS2 gene and the content of proline (Fig. ); therefore, this gene is considered to play a key role in increasing proline under salinity stress. The overexpression of P5CS (Δ1-pyrroline-5-carboxylate synthetase) could increase the proline content in potato and rice and enhance the salt tolerance of these plants. Furthermore, p5cs1-4 mutants exhibited strongly impaired proline accumulation in response to NaCl, suggesting that P5CS1 contributes more to stress-induced proline accumulation .

Sugars and sugar alcohols act as osmolytes and antioxidants, in addition to being resources for metabolism and structural support , . Raffinose increased in response to salinity under all conditions, especially in roots, where its maximum accumulation was observed at the 54-h timepoint in CSR28 roots (Table ). Nishizawa et al. reported that galactinol and raffinose protect plant cells against oxidative stress by scavenging hydroxyl radicals. Myo-inositol accumulated more in the roots of CSR28 under long-term salinity than in those of IR28. Applying external myo-inositol to Malus hupehensis Rehd under salinity stress prevented the damage caused by salt accumulation by supporting the plant antioxidant defense system, Na+ and K+ ion homeostasis and osmotic balance . IMP (L-myo-inositol monophosphatase) is a key enzyme in the final step of myo-inositol biosynthesis. The present study revealed a significant correlation between the myoinositol content and the gene expression of OsIMP in response to salinity stress (Fig. ). It has been reported that the overexpression of OsIMP in transgenic tobacco led to elevated inositol levels and improved cold tolerance by regulating antioxidant enzymes .
Based on the assessment of primary metabolite data, although most OAs (61.8%) decreased in both salt-stressed organs, the value and pattern of their accumulation differed among organs, genotypes and timepoints. The lower reduction in OAs in the roots of both genotypes under long-term salinity stress could be due to the compensation of ionic imbalance . Increasing the amount of citrate and isocitrate anions helps maintain the ionic balance disturbed by the excessive influx of the toxic Na+ cations . In addition, the accumulation of OAs in roots can play a role in osmotic adjustment. The results of the present study were consistent with those of Zhao et al. . Our study showed that three genes involved in glycolate metabolism were up-regulated in the roots of CSR28 in response to salinity stress (Fig. ). Glycolate oxidase (GLO) is a key enzyme for photorespiratory metabolism in plants. The overexpression of four GLO-encoding genes has been shown in rice transgenic lines to enhance photosynthesis under conditions of high light and high temperature. Furthermore, H2O2, which can serve as a signaling molecule, was induced upon GLO overexpression . Since H2O2 and GLO were both induced in the present study, we hypothesized that stress defense responses were triggered by the signaling function of H2O2 in cooperation with GLO gene expression.

In this study, the impact of high salinity on rice genotypes was investigated at the seedling stage. The tolerant genotype (CSR28) exhibited better salt tolerance than the sensitive genotype (IR28). Osmoprotectants such as AAs and sugars increased, while OAs decreased in response to salinity stress. Strong correlations were observed between key genes and important compounds such as proline, myoinositol, CAT, and SOD under salt stress. This study highlighted the importance of gene expression and metabolomics data for understanding salt tolerance mechanisms and identified potential biomarkers for developing new salt-tolerant rice varieties.

Plant materials and growth conditions

Seeds of two rice ( Oryza sativa L. ssp. Indica ) genotypes with varying salt tolerances were procured from the International Rice Research Institute (IRRI) in the Philippines. The sensitive genotype IR28 was developed at the IRRI, while the tolerant genotype CSR28 (IR51485-AC6534-4) was developed at the Central Soil Salinity Research Institute (CSSRI) in Karnal, India. The plants were cultivated hydroponically in the greenhouse at Heinrich-Heine-University (HHU) in Düsseldorf, Germany. Initially, the seeds were treated with 2.5% sodium hypochlorite for sterilization and then germinated at 28 °C in the absence of light. Subsequently, the seedlings were transplanted into 4-liter pots containing Yoshida culture medium and were grown under a light regime of 14 h light and 10 h dark at a temperature of 28 ± 2 °C. The culture medium at a pH of 5.5 was replaced every 3 days. After 2 weeks, the seedlings were subjected to 150 mM (15 dS/m) NaCl. The roots and shoots of both the untreated and salt-treated plants were collected at 6 h, 54 h, and 1 week after salt treatment.

Phenotypic evaluations of salinity tolerance

To evaluate the salinity tolerance of the IR28 and CSR28 genotypes, the length and the fresh and dry weights of roots and shoots, the leaf relative water content (RWC) and the salinity tolerance scores were assessed (in three replications of five seedlings each) 1 week after 150 mM salt treatment.
Root and shoot dry weight Dry weight was determined after placing the samples in a 72 °C oven for 48 h. Leaf RWC Leaf RWC was calculated for the youngest fully developed leaves with the following equation: RWC = (FW − DW)/(TW − DW) × 100, where FW, DW and TW represent the fresh, dry, and turgid weights, respectively. Salt score Twenty seedlings subjected to salinity treatment for 1 week were used to score the salinity tolerance of the genotypes based on the method of Gregoria et al. , in which 1, 3, 5, 7 and 9 refer to very tolerant (normal growth), tolerant (relatively normal growth), relatively tolerant (delayed growth), sensitive (completely stopped growth) and very sensitive (death of all plants), respectively. Determination of H 2 O 2 and MDA contents and antioxidant enzyme activity The H 2 O 2 content (a ROS) and the malondialdehyde (MDA) content (a product of lipid peroxidation) are indicators of stressful environments. The H 2 O 2 content in the root and shoot samples was measured using the method described by Ghiazdowska et al. . The method of Heath and Pacher was used to measure MDA as a measure of lipid peroxidation. Catalase (CAT) and superoxide dismutase (SOD) are essential antioxidant enzymes required for ROS scavenging when plants experience salt stress. CAT and SOD activities were measured according to the methods described by Scebba et al. and Giannopolitis , respectively. Metabolite profiling The topmost parts of the plants were harvested (from five replications of 10 plants each) after 6 h and 54 h of salinity treatment, shock-frozen in liquid nitrogen and stored at −80 °C until further processing. The samples were ground in a mortar and freeze-dried. Ten milligrams of lyophilized material were extracted with 1.5 ml of a water, methanol and chloroform (1:2.5:1, v/v) mixture including 5 µM ribitol as an internal standard, and the extracts were stored at −20 °C. GC–MS analysis was conducted using protocols adapted from Lisec et al. and Gu et al. , as described previously by Shim et al. . For relative quantification, the peak areas of all metabolites were normalized against sample weight and the peak area of the internal standard ribitol, which was added with the extraction buffer. Quantitative real-time PCR (qRT-PCR) analysis qRT-PCR analysis was used to evaluate the expression of key genes related to the measured metabolites and antioxidant enzymes. Total RNA from the control and stressed samples was extracted with a P-Biozol kit (Bio Flux-Bioer, Tokyo, Japan). Spectrophotometry and agarose gel electrophoresis were used to determine the quantity and quality of the extracted RNA after DNaseI treatment. cDNA was synthesized from 1 µg of total RNA with a cDNA reverse transcription kit (Applied Biosystems, California, USA), according to the manufacturer’s protocol. The primers (Table ) were designed with Primer Express v3.0 software (Applied Biosystems, Foster City, CA). qRT-PCR analysis of three biological and two technical replicates was performed with an iCycler iQ5 thermocycler (Bio-Rad) and SYBR Green I (SBP, Iran). All reactions were performed with the default parameters. The expression level of each gene was normalized to that of the internal control gene, elongation factor 1 alpha ( OseEF-1a ).
The 2^−ΔΔCT method and the log2 fold change (FC) were used to calculate relative expression as a salinity/control ratio. Ratios were considered statistically significant when │log2 FC│ ≥ 1 and the P value was ≤ 0.05 (Student’s t -test). Data analysis The salinity tolerance of the two genotypes was assessed using phenotypic and physiological data through Student’s t-test ( P value ≤ 0.05). The ROS and antioxidant contents and the metabolite profiles were analyzed as a factorial experiment in a completely randomized design (CRD), and significance was tested using ANOVA in SAS v9.2 software. Relative metabolite abundances in roots and shoots of the two genotypes were compared at two timepoints based on the salinity/control ratio, and their significance was determined using Student’s t-test. Furthermore, the means were compared through Duncan’s Multiple Range test ( P value ≤ 0.05). MeV v4.9.0 (Multiple Experiment Viewer) software was used for heatmap and cluster analysis. We also used the “cor” function in R to calculate Pearson correlation coefficients among metabolites, applying a threshold of |r| > 0.80 and a P value < 0.05. A linear regression analysis in R was performed to determine whether there was a significant relationship ( P value ≤ 0.05) between metabolites or antioxidant enzymes and their corresponding coding genes.
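As an illustration of the calculations described above, the sketch below shows how the 2^−ΔΔCT relative expression, the log2 FC threshold, and the gene–metabolite correlation and regression filtering could be carried out in R (the language the authors used for the correlation and regression steps). The Ct values, metabolite abundances, and column names are hypothetical and serve only as an example, not as the authors' actual script.

```r
# Illustrative sketch only; all numeric values and column names are hypothetical.

# Relative expression by the 2^-ddCt method, reported as a salinity/control ratio.
rel_expr <- function(ct_target_salt, ct_ref_salt, ct_target_ctrl, ct_ref_ctrl) {
  ddct <- (ct_target_salt - ct_ref_salt) - (ct_target_ctrl - ct_ref_ctrl)
  2^(-ddct)
}

ratio <- rel_expr(24.1, 18.0, 26.3, 18.1)  # hypothetical Ct values
l2fc  <- log2(ratio)
sig   <- abs(l2fc) >= 1                    # |log2 FC| >= 1 cut-off used in the study

# Gene-metabolite relationship: Pearson correlation (|r| > 0.80, P < 0.05)
# followed by a simple linear regression, as in the Data analysis section.
df <- data.frame(proline = c(1.2, 1.8, 2.6, 3.1, 3.9),   # hypothetical abundances
                 OsP5CS2 = c(0.9, 1.5, 2.2, 2.9, 3.6))   # hypothetical expression
ct   <- cor.test(df$proline, df$OsP5CS2, method = "pearson")
keep <- abs(ct$estimate) > 0.80 && ct$p.value < 0.05
fit  <- lm(proline ~ OsP5CS2, data = df)   # metabolite ~ coding-gene expression
summary(fit)
```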
Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Diagnostic Performance of Plasma P‐tau217, NfL, and GFAP for Predicting Alzheimer’s Disease Neuropathology Across Diverse Neurodegenerative Syndromes | 53615c23-b531-416f-880d-69e56095351d | 11715677 | Forensic Medicine[mh] | |
Preclinical training of future ocular surgeons: a French opinion-based study | e4408a34-600b-41fa-b6f0-2bab0f37e748 | 10858601 | Ophthalmology[mh] | Ophthalmology learning concomitantly associates theorical workout, clinical and surgical training. Not only cognitive loads participate in the know-how of specialty but also technical dexterity and mechanical sense, which is supposed to develop all along the initial training . Medical training relies primarily on knowledge made of memory recalling, followed by clinical application, which generates procedural memory. Both are thereafter applied on patients . In that, surgical skills extend far beyond intellectual knowledge. Ocular surgery is particularly demanding because each procedure step shapes ineluctably the next one and can never be processed twice. To ensure learning success for in-training surgeons, some supplemental background should be acquired as compared to exclusively medical specialties. For instance, students should master every technical detail and step of ocular procedures, practice with ease using both hands and feet, anticipate common reactions of biological tissues, accustom themselves to complex operating devices, and finally manipulate intraocular prosthesis and biomaterials. Furthermore, ophthalmologists operate under operating microscopes as well as complex visualization systems, including augmented reality. Finally, they should master surgical non-technical skills. Before proceeding on patients in real-life, self-confidence is mandatory and implies a significant commitment ahead of training. In such a context, simulation seems an interesting option. It is positively correlated with surgical dexterity of resident and junior surgeons in real-life ocular surgery . Practically, two fields of surgical training are usually associated: dry- or wetlab simulation, and hands-on training. In most French cities, simulation has been set available throughout the last decade, using virtual reality simulators (e.g.: EyeSi surgical , Haag-Streit ; Germany) or basic surgical kits (e.g.: Kitaro kits , FCI Ophthalmics ; MA, USA). Teaching program have been designed for simulation and were incorporated within residents training programs. Concurrently, senior surgeons have supervised junior surgeons in their hands-on training, at least while performing surgery on real patients. The way to emancipate future ocular surgeons should ideally lead them to progress from simulating surgery to hands-on training, under educational master supervision. In the end, in-training surgeons should be evaluated for surgical dexterity, ideally in a reproducible and impartial fashion. Such attributes are not entirely fulfilled solely by senior surgeon’s opinion. The contribution of objective scoring provided by simulators could dramatically help. Still, it remains optional in many European Union countries, including France. Worldwide, Ophthalmology residents seem generally satisfied by their surgical training . Recently, residents of ophthalmology from Paris reported a good satisfaction level toward the surgical side of their training program. At most, some of them suggested to further improve access to simulation and hands-on labs. Some others claimed being ill-prepared to ocular surgery in emergency eye-care . But in fact, little is known about the global opinion of residents in ophthalmology and how they would rate the surgical program they are enrolled in. Not much data are available to compare resident’s feedback across regions within a single country. 
We developed a dedicated questionnaire to address this question. We gathered opinions about surgical training programs during residency and allowed spontaneous suggestions from residents regarding any possible improvement in the surgical program. We sent the questionnaire to all students enrolled in a French residency program of ophthalmology and present the answers collected. The aim of the present report is to present the opinion of French residents in ophthalmology on their own surgical training. Ophthalmology residency training in France Residency programs in France last 6 years, divided into 3 phases: First phase (“phase socle”): 1 year dedicated to learning the basic clinical skills in ophthalmology as well as first simulation training sessions; a surgical simulation exam is offered in some regions at the end of this first year; Second phase (“phase d’approfondissement”): 3 years of in-depth learning in which the resident is actively involved in patient care, with night calls and increased responsibilities in the operating room in the presence of a tutor; Third phase (“phase de consolidation”): 2 final years of learning consolidation, in which the resident has increasing autonomy within patient care, the last year being more similar to a fellowship. After residency, one or two years are required as a fellow in order to become an ophthalmologist and pursue a career in private practice or within a hospital. No objective list of surgical procedures or skills is required at the end of residency. Surgical simulation is mandatory in some regions, optional in others and not available in a few regions. There are no national guidelines regarding simulation requirements during residency, although centers offering simulation generally require a passing score of 400 on the EyeSI simulator . Questionnaire inception We sent a questionnaire to residents enrolled in a French ophthalmology residency program. Residents were exhaustively identified across the 27 French regions, based on the public list of French residents annually released by the Official Journal of the French Republic ( Journal Officiel de la République Française ). We contacted residents by email through mailing lists, by telephone and through social media platforms. We generated an online questionnaire using Google Forms®. The residents were given the opportunity to take the survey by following the questionnaire’s URL. The respondents had to be enrolled in a French residency program, and their seniority could range from the first to the last year of residency. The questionnaire consisted of 27 successive questions, comprising single/multiple-choice and open-ended items. The first part inquired about the geographical location of the residency program (city) and the starting year (i.e., first year of the program). The next parts successively asked residents to rate their own surgical training program from 0 to 10, to indicate the ideal proportion of simulation versus hands-on training during their training, and whether or not they could identify a personal mentor, a list of items (goals & objectives) to achieve before the end of the program, and a formal simulation program they had to comply with (e.g., a simulation clerkship for drylabs and/or wetlabs). We questioned whether simulation was a mandatory part of the teaching program. We inquired about the type of simulated surgery (stitching, incisions, cataract extraction, keratoplasty, filtering or vitreoretinal surgery, etc.).
Residents were asked to rate both drylabs and wetlabs from 0 to 10 and to report whether they could attend them. We also gathered the time until they completed their very first surgical procedure on a real patient, their first cataract extraction and their first vitreoretinal surgery. In the following part, we asked them to self-evaluate their surgical autonomy after 4 years of residency (before the third phase) and after the 6-year residency (after having completed the third phase). In the last part, free comments could be provided regarding surgical training and teaching, along with wishes for further improvement. This study was approved by the Ethics Committee of the French Society of Ophthalmology (IRB 00008855 Société Française d’Ophtalmologie IRB#1). All methods were carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the Ethics Committee of the French Society of Ophthalmology. Informed consent was obtained from all subjects and/or their legal guardian(s). Group of residents Residency programs were grouped according to French regions for analysis as follows: Paris (Paris city, Ile-de-France); North (Amiens, Angers, Besançon, Brest, Caen, Dijon, Lille, Nancy, Nantes, Poitiers, Reims, Rennes, Rouen, Strasbourg, Tours); South (Bordeaux, Clermont-Ferrand, Grenoble, Limoges, Lyon, Marseille, Montpellier-Nîmes, Nice, Saint-Etienne, Toulouse); Overseas (Antilles-Guyane). The term « other regions » referred to the pooled data from the North, South and Overseas regions. Statistical analysis We described continuous variables by means and standard deviations and compared them with a Student’s t-test, after the data’s distribution was verified for normality with a Shapiro-Wilk test. A one-way analysis of variance (ANOVA) was used to compare continuous variables across more than two groups. We described categorical variables by percentages and compared them with a Chi-square test when required. All tests were two-sided, and we considered a p-value < 0.05 as statistically significant. Statistical analyses were performed with the Statistical Analysis System® (SAS v9.4) and figures were built using Microsoft Excel® software.
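To make the comparisons described above easier to reproduce on similar survey data, a minimal sketch is given below. The authors performed their analyses in SAS v9.4; the R code and the toy data frame here are hypothetical and only illustrate the same tests (Shapiro-Wilk normality check, Student's t-test, one-way ANOVA, and chi-square test), not the study's actual program.

```r
# Minimal illustrative sketch (the study itself used SAS v9.4); the data frame
# and its values are hypothetical.
survey <- data.frame(
  region = c("Paris", "Paris", "Paris", "Other", "Other", "Other"),
  score  = c(7, 8, 6, 5, 4, 6),                 # 0-10 rating of surgical training
  simulation_access = c(TRUE, TRUE, TRUE, FALSE, TRUE, FALSE)
)

shapiro.test(survey$score)                       # normality of the ratings
t.test(score ~ region, data = survey)            # Paris vs. other regions
summary(aov(score ~ region, data = survey))      # ANOVA when >2 groups are compared
chisq.test(table(survey$region, survey$simulation_access))  # categorical comparison
```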
We reached a total of 1057 residents by email or social media. Among them, 321 answered the questionnaire, corresponding to an overall response rate of about one third (30.3%). Residents registered in a specific transversal program (subspecialty training within residency in France, available in Oculoplastics and Pediatric Ophthalmology) accounted for 6.2%. Respondents were distributed homogeneously across seniority levels (Table ). Answers came from all 27 French regions (Fig. ). A quarter of the answers came from residents of Paris (17%) and Lyon (10%), the two most populated cities in France, followed by Lille (8%) and Bordeaux (7%). Respondents attributed a mean score of 5.27 ± 2.4/10 to the surgical training program they were attending, which included simulation with drylabs and wetlabs as well as hands-on training. The mean score showed great variability depending on the region (Fig. ). Regardless of the type of ocular surgery, simulation was accessed by all residents in Paris and by 78.1% in other regions ( p < 0.005). For cataract surgery, the overall subjective rating was 7.31 ± 1.89/10 among 186 residents for drylabs and 6.39 ± 2.1/10 among 311 residents for wetlab training, all simulated surgery taken together. Answers to binary (yes/no) questions are shown in Table . A small majority of respondents ( n = 193/321; 60.1%) declared participating in drylabs and/or wetlabs on an optional basis, while more than a third ( n = 113/321; 35.2%) had to attend a simulation program as a formal part of the residency, for example through skills labs. These skills labs mainly involved training for cataract surgery and stitching. Only residents from the overseas region of Antilles-Guyane did not have any access to simulation. A large majority of residents had already performed at least a single step of ocular surgery on a real patient (Total n = 300/321; 93.4%; Paris n = 49/56; 87.5%; other regions n = 251/265; 94.7%, p = 0.047), and 259/321 (80.7%) of them claimed to have completed a whole cataract extraction procedure (Paris n = 47/56; 83.9%, other regions n = 212/265; 80%, p = 0.498). On average, the first cataract surgery was completed in its entirety by the end of the third semester (Total: 3.8 ± 1.9 semesters; Paris: 2.6 ± 1.4 semesters; regions: 4.05 ± 1.96 semesters, p < 0.0001). Meanwhile, 26.8% ( n = 86/321) of the respondents had performed at least a procedural step of vitreoretinal surgery (Paris n = 25/56; 44.6%, other regions n = 61/265; 23%; p = 0.00005), on average at the beginning of the sixth semester (Total: 6.38 ± 1.96 semesters; Paris: 5.2 ± 1.9 semesters; regions: 6.97 ± 1.6 semesters, p = 0.0003). Overall, less than half of the respondents had access to simulation for vitreoretinal surgery (44.2%). A higher rate of access was reported by residents in Paris (76.8%) compared with 37.4% in other regions ( p < 0.00001). Less than a quarter of respondents had access to training kits (usually the KITARO® kit) during residency, whose utility was rated 5.6/10. Almost half of the residents (48.9%) were able to identify a senior mentor dedicated to their surgical training, more likely in Paris (Paris n = 35/56; 62.5% vs.
other regions n = 101/265; 38.1%, p = 0.00079). In the final comments, residents agreed on the need for senior mentoring throughout the surgical teaching program. Unsurprisingly, 82.2% of respondents nationwide could not clearly define objectives related to the surgical training program they belonged to. In this regard, no significant difference was reported between Paris and other regions ( p = 0.77). Before reaching surgical autonomy, only 58 respondents (18%) would balance surgical training equally (50:50) between simulation and hands-on training, while a quarter considered 30:70 optimal. Most of the other respondents ( n = 233; 72.58%) suggested favoring hands-on training over simulation throughout residency (Fig. ). At the same time, hands-on surgical training during residency was given an overall score of 5.5 ± 2.6/10. This hands-on training score showed great disparities between Paris and other regions (Paris: 7.1 ± 2.2/10, Regions: 5.2 ± 2.5/10, p < 0.0001). Residents foresaw being surgically independent significantly more often by the end of the third phase than by the end of the second phase ( n = 287/321; 89.4% vs. n = 184/321; 57.3%, p < 0.00001). We noticed a discrepancy between Paris and other regions at the end of the 8th semester only (end of the second phase: Paris n = 45/56; 80.4%; other regions n = 139/265; 52.5%; p = 0.000125; 10th semester: Paris n = 53/56; 94.6%; other regions n = 234/265; 88.3%; p = 0.16). Some residents, mostly outside the Paris (Île-de-France) region, mentioned easier access to hands-on surgery in private structures. Others reported that suburban hospitals allowed easier access to surgery compared with their affiliated university center. They commented on the lack of national standardization, as well as the need for more formal surgical aims and senior mentoring. They suggested that mentoring should start by the 3rd month of residency at the latest. Naturally, they would welcome better access to surgery on real patients. They expressed some regret about the lack of evaluation of teaching programs and suggested that the quality of surgical pedagogy should be evaluated by residents through a formal rating. They also called for more videotaped surgical courses provided by expert surgeons and would welcome pedagogical debriefing based on their own videotaped surgical performances. The need for more top-down theoretical teaching was not raised. In the present survey, residents seemed to favor experiential practice based on simulation, followed by hands-on training. Access to simulation followed by a transition towards hands-on surgical training was strongly demanded by residents who did not have access to it. Other comments also pointed to overly selective access to subspecialty surgical training, such as corneal, glaucoma, or vitreoretinal surgery. The satisfaction of residents in ophthalmology with their surgical training is encouraging and globally positive throughout France, and residents valued simulation training. This is in line with what has been reported from the specific area of Paris-Île-de-France by Martin et al. With a third of contacted residents responding, the collected declarative data can be considered reasonably representative, and all French ophthalmology residency programs were represented in the study.
At a national level, we observed significant disparities between Paris and other regions, as well as between the regions themselves. This reflects poor nationwide standardization of surgical training programs. According to the provided answers, access to surgical training itself varies greatly in both quality and quantity. We did not focus on the causes of such disparities. The lack of standardization at the national level has been reported by other international studies . It is likely that the absence of clear national standardization of surgical training, for example through surgical goals & skills textbooks, plays a role. Rating the objectives of surgical learning programs has been proposed using the SMART (specific, measurable, achievable, reasonable, and time-bound) goal framework . Recent studies have evaluated residency programs at the European level . They determined the minimal number of procedures to be performed for each type of surgery during residency. Surgical volume is an important metric for approximating residents' ease in performing a specific procedure, but the ability to operate should also be assessed by a senior surgeon. The quality of procedural learning and of other non-technical skills is also pivotal. Residents could take advantage of a textbook of goals & objectives. It would serve as a tool to design surgical supervision. It would also help document surgical achievements, thus making senior surgeons more comfortable letting residents operate. Such a tool has already been discussed in the literature and could further complete a surgeon's certification (operating license). At the same time, it would support residents' access to the operating room, for instance by endorsing their role as leading operator. In the United States, the ACGME (Accreditation Council for Graduate Medical Education) has issued guidelines for teaching during residency and is in charge of enforcing residency programs' compliance with them. Residents are also surveyed annually through a questionnaire, which captures their satisfaction with the residency program . However, whether such feedback could be meaningful, or even collected, remains to be determined in countries with fewer resources allocated to teaching programs or less central coordination. Responding residents globally described poor access to subspecialty surgical training, in accordance with previous data . The opportunity to practice as a subspecialist is limited compared with comprehensive ophthalmology. The more specialized the practice, the greater the teaching complexity. Besides, fewer patients are referred to subspecialists. Logically, subspecialized practice is sparsely accessed during fellowships, and potentially even at a senior level. Although residents complain about it when interviewed, fewer surgeons are needed in these fields. As a matter of fact, only a few future surgeons need to be specifically trained, so it seems acceptable that a track record of excellence should govern access to such training. Our questionnaire did not include questions regarding surgery in emergency situations, such as identifying dystopic anatomy and suturing recent open globe injuries, but we postulate that the same approach could apply to such complex procedures. Surgical teaching has progressively evolved from relying on the Halstedian model of graduated responsibility to surgical simulation as a preliminary step in the learning course .
Higher standards for patient safety, combined with fewer teaching resources, may have prompted this transition . Simulation now serves as a key element of the transition towards hands-on surgical training. Its benefit has been widely demonstrated over the past decade, either with the EyeSI simulator or through other wetlabs . However, our study highlights regional disparities in access to simulation (drylabs and wetlabs). Across French regions, accessibility varies greatly, depending on the involvement of local universities and health agencies (ARS, Agences Régionales de Santé). As a matter of fact, not all regions have a simulation platform available. Meanwhile, the Paris region has made dry- and wetlabs widely available to residents through virtual reality surgical simulators and in-training workshops, making simulation a mandatory part of residents' preclinical training. We acknowledge several limitations of the present study. It is retrospective in design and, as the questionnaire was optional, all French residents could not be exhaustively interviewed. Nevertheless, we are grateful that a third of the residents took our questionnaire, which is meaningful for an opinion-based study. Answers were subjective. They may also reflect residents' lack of knowledge about their access to simulation or surgery, especially among younger residents. It is possible that some respondents sent answers twice, although this eventuality seems very unlikely given the time needed to fill in such a questionnaire for residents dealing with busy clinical schedules; we would also have detected identical records in our database in such a case. In conclusion, French ophthalmology residents reported satisfaction with the surgical training program they were enrolled in, although with some regional disparities. The need for harmonization of surgical goals and objectives is underlined. Access to simulation was valued by residents, as part of a progressive and supervised transition to surgical training on real patients. Residents would support the evaluation of surgical skills, which could serve as an "operating license", attest to the specific surgical knowledge they have acquired, and facilitate their access to real surgery supervised by seniors. According to residents in ophthalmology, the program they are enrolled in should be evaluated by the residents themselves in order to improve surgical teaching. Below is the link to the electronic supplementary material. Supplementary Material 1
Association between frequency of mass media exposure and maternal health care service utilization among women in sub-Saharan Africa: Implications for tailored health communication and education | eb09016e-5d6c-434a-a0f4-b89ca839dfc5 | 9522280 | Health Communication[mh] | There has been substantial improvement in the reduction of maternal mortality rates globally; however, sub-Saharan Africa (SSA) continues to possess a high rate of maternal mortality relative to the global front . The World Health Organization (WHO) reported that an estimated 810 pregnant women died daily in 2017, and 94% of all maternal deaths occur in developing countries . It has been widely reported that maternal health service is an important approach towards avoiding pregnancy-related complications and reducing maternal mortality in SSA . Maternal healthcare is the overall wellbeing of a woman from the time of pregnancy to after birth. Maternal healthcare utilization during the three critical stages (antenatal, birth, and postnatal) is very important, as it contributes largely to reducing maternal and infant mortality and morbidity . Antenatal care (ANC) encompasses all the routine care provided to pregnant women from conception to the onset of labor, and it helps to provide care for the prevention and management of existing and potential causes of maternal mortality and morbidity . The new WHO antenatal care model recommends that the first antenatal care visits take place during the first trimester (that is below 12 weeks of pregnancy), with additional 7 visits recommended . Antenatal care utilization has been reported to be key in ensuring an optimal health outcome for women and babies . Skilled birth attendance (SBA) refers to pregnant women seeking care from trained health professionals to provide healthcare to mothers and newborn babies before and during delivery to manage normal deliveries and, diagnose, manage, or refer obstetric complications . The use of traditional birth attendance (TBA) is predominant in most countries in SSA . However, TBA is not ideal as it leads to several complications, therefore recommending SBA which reduces birth complications and maternal mortality is in the right path . Postnatal care (PNC) is the care given to a mother and the newborn baby, immediately after the birth of the placenta and for the first 42 days of life . A larger proportion of maternal and neonatal mortality has been reported to occur during childbirth and the postnatal period, making it a critical period for the needed health care to be available and accessed . Care given at the PNC period helps health workers determine any post-delivery problems quickly and attend to them on time to prevent ill health, disability or death . Given that maternal health services are important in reducing maternal mortality and morbidity, it is important that these services are utilized at each of the critical stages. To utilize these services, awareness needs to be raised on their availability and effectiveness, and mass media can be a medium for such awareness and education on maternal health services availability, importance, and effectiveness . Mass media includes written broadcast, or spoken communication that reaches the public audience and serves as an important mechanism for societal integration . It is used to disseminate information to a large audience at a relatively faster rate and at a cheaper cost . Mass media promotes health through two key strategies. 
These strategies are: (1) reaching a wide audience across different boundaries at the same time, and (2) exposing the public to specific messages that influence public beliefs, attitudes, and behavior . Awareness creation through mass media has the potential to encourage positive behaviors and discourage negative health-related behaviors through direct and indirect pathways . Television and radio are the most widely used media for creating awareness among a large audience in SSA; nevertheless, print media such as magazines and newspapers, and outdoor media such as billboards and posters, have also proven to be effective . Mass media has been shown to be an effective medium for reaching mothers at a large scale to enhance their utilization of maternal health services, especially in developing countries . For example, women who read newspapers or reported watching television in Bangladesh were almost three times more likely to utilize a maternal health service . Another study from Uganda reported a positive impact of mass media on maternal health service utilization . The Sustainable Development Goals (SDG) 3.1 and 3.2 seek, respectively, to reduce global maternal mortality and to end preventable deaths of newborns and children under five by 2030, with all countries targeting reductions in neonatal and under-five mortality . These aims are supported by other global interventions such as the strategies towards ending preventable maternal mortality , and the Global strategy for women’s, children’s and adolescents’ health 2016–2030 . An important pillar for achieving these goals in SSA is the utilization of maternal health services. Women’s exposure to mass media (e.g., watching TV, reading a newspaper, listening to the radio, among others) can promote their utilization of maternal health services . Different types of mass media may have different associations with maternal health services utilization . Even though there have been some studies in SSA on the association between mass media and maternal health services utilization , there is limited literature on the association between the different types of mass media and maternal health services utilization at the SSA regional level. This study, therefore, aimed at assessing the association between the different types of mass media and maternal health services utilization among women in SSA. Findings from this study could help fill an important gap in the literature on maternal health services utilization in SSA. Findings could also help in understanding the different types of mass media that can contribute to enhancing maternal health services utilization in SSA, which in turn will contribute to the reduction of maternal mortality rates in SSA and the achievement of SDGs 3.1 and 3.2. Data source and study design Data from the recent Demographic and Health Surveys (DHS) conducted between 2010 and 2020 were used in this study. A total of 28 countries with a survey dataset within 2010–2020 were included in our study . The data was extracted from the women’s files of the 28 countries. DHS is a comparable nationally representative survey conducted in over 90 low-and-middle-income countries worldwide since its inception in 1984 . The survey adopted a cross-sectional design to collect data from the respondents. The respondents were sampled using a two-stage sampling technique, with the detailed sampling methodology highlighted in the literature .
Level one comprised women who had a pregnancy in the five years preceding the survey, and level two referred to the enumeration area or cluster. The DHS employed a structured questionnaire to collect data on health and social indicators such as maternal health service utilization and exposure to mass media . In the present study, we included 199,146 women in level one and 1611 clusters in level two. The dataset used in the study can be freely accessed at https://dhsprogram.com/data/available-datasets.cfm . We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement guidelines to frame this study. Variables Outcome variables Three maternal health care service utilization variables (ANC, SBA, and PNC) were the outcome variables in this study. With ANC, the women were asked the number of antenatal visits they had during the recent pregnancy. The response was continuous and was recoded into ‘No (0–3 = 0)’ and ‘Yes (4 or more = 1)’. For SBA, the women who had assistance during delivery from qualified categories of health professionals were coded as having ‘assisted delivery = 1’, whilst the remaining women were grouped as ‘not having assisted delivery = 0’. Regarding PNC, the women were asked whether they had a baby postnatal check within 2 months after delivery. The response categories were ‘No’, ‘Yes’, and ‘Don’t know’. Those who responded ‘don’t know’ were dropped, and we utilized the remaining responses ‘No = 0’ and ‘Yes = 1’ in the analysis. The response coding in this study was informed by previous studies . Exposure variables Frequency of listening to the radio, frequency of watching television, and frequency of reading newspapers or magazines were the key explanatory variables. All three variables had the same response options: ‘not at all’, ‘less than once a week’, ‘at least once a week’, and ‘almost every day’. For this study’s purpose, the responses ‘at least once a week’ and ‘almost every day’ were merged and recoded as “at least once a week”. The final response categories used for each of the three exposure variables after the recoding were “0 = not at all; 1 = less than once a week; and 2 = at least once a week”. The coding and categorization of the explanatory variables were based on the literature . Covariates The covariates included in this study were selected based on their significant association with the outcome variables as well as their availability in the DHS dataset . The variables were sectioned into individual-level factors (maternal age, educational level, religion, current working status, parity, health insurance coverage, marital status, getting medical help for self: Permission to go, getting medical help for self: distance to health facility, and getting medical help for self: getting money for treatment) and contextual factors (wealth index, place of residence, and geographical subregions). We maintained the coding for maternal age, educational level, current working status, health insurance coverage, getting medical help for self: Permission to go, getting medical help for self: distance to health facility, getting money for treatment, wealth index, and place of residence as found in the DHS dataset. Marital status was recoded into 0 = never married; 1 = married; 2 = cohabiting; 3 = widowed; 4 = divorced; and 5 = separated. Religion was coded as 0 = Christianity; 1 = Islamic; 2 = African Traditional; 3 = No religion; and 4 = others.
Parity was recoded into 0 = one birth; 1 = two births; 2 = three births; and 3 = four or more births. The 28 countries used in this study were grouped into their geographical subregions and coded as 0 = Southern Africa; 1 = Central Africa; 2 = Eastern Africa; and 3 = Western Africa. Statistical analyses We first extracted the data from the individual women’s files in the 28 countries and appended them for analysis. The data were cleaned, and all missing observations were dropped. Only countries with complete cases for the variables of interest were included in the final analysis. First, percentages were used to present the results of the utilization of ANC, SBA, and PNC using a forest plot (Figs – ). We performed crosstabulations to determine the distribution of the outcome variables across the exposure variables and the covariates. Pearson’s chi-square test of independence was employed to determine the significant variables using the p-value ( p < 0.05). We employed the ‘best subset variable selection method’ to obtain the variables for the regression analysis. According to Lindsey and Sheather , the best subset variable selection method provides the researcher with the best combination of predictors for each level of model complexity. To perform this, we used the Stata command ‘gvselect’ together with all the covariates to determine which set of covariates to include in the regression model. The output of the best subset selection included the log-likelihood and Akaike’s information criterion (AIC). We selected the set of variables with the lowest AIC for this study. To determine the influence of the different types of mass media on ANC, SBA, and PNC, a multilevel logistic regression was adopted and modelled in three steps. Models 0, I, and II were fitted to include the outcome variable only, the key explanatory variables only, and the key explanatory variables plus the covariates from the best subset selection, respectively. The AIC was further used to assess model fitness and to compare the models. Adjusted odds ratios (aOR) with their 95% confidence intervals (CIs) were used to present the results of the regression analysis in tabular form. Furthermore, the intraclass correlation coefficient and the variance components are reported. The women’s sample weight (v005/1,000,000) was applied in all analyses to alleviate biased estimates, following the DHS guidelines. Also, we used the survey set ‘svy’ command in Stata to adjust for the complex sampling technique employed by the DHS in all the analyses. Statistical significance was set at a p-value less than 0.05. Stata software version 16.0 was used to perform the analysis. Ethical consideration The study required no ethical clearance because the DHS dataset is freely available in the public domain. Prior permission to use the dataset was sought from MEASURE DHS. We also adhered to ethical guidelines regarding the use of secondary datasets for publication. Detailed information about DHS data usage and ethical standards is available at http://goo.gl/ny8T6X .
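As an indicative sketch of the two-level modelling strategy described above (women nested within clusters), the R code below fits media-exposure models for a binary ANC outcome and compares them by AIC. The study itself was analysed in Stata with survey weighting; the lme4-based code, the file name, and all variable names here are hypothetical, and the DHS sampling weights and 'svy' adjustments are not reproduced.

```r
# Illustrative sketch only (the study used Stata with sampling weights and 'svy');
# the file and variable names below are hypothetical.
library(lme4)

dhs <- read.csv("dhs_women.csv")   # one row per woman; 'cluster' = enumeration area
dhs$radio <- factor(dhs$radio, levels = 0:2,
                    labels = c("not at all", "less than once a week",
                               "at least once a week"))
# television and newspaper exposure would be recoded in the same way

# Model I: media exposure only, with a random intercept for the cluster.
m1 <- glmer(anc4 ~ radio + tv + newspaper + (1 | cluster),
            data = dhs, family = binomial)

# Model II: adds covariates retained by the variable-selection step.
m2 <- glmer(anc4 ~ radio + tv + newspaper + education + wealth + residence +
              (1 | cluster), data = dhs, family = binomial)

AIC(m1, m2)                        # model comparison, as in the paper
exp(fixef(m2))                     # adjusted odds ratios
exp(confint(m2, method = "Wald"))  # 95% confidence intervals

# Intraclass correlation from the cluster-level variance (logistic scale).
v   <- as.data.frame(VarCorr(m2))$vcov[1]
icc <- v / (v + pi^2 / 3)
icc
```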
We first extracted the data from the individual women's files in the 28 countries and appended them for analysis. The data were cleaned, and all observations with missing values were dropped. Only countries with complete cases for the variables of interest were included in the final analysis.

First, percentages were used to present the results of the utilisation of ANC, SBA, and PNC using forest plots (Figs – ). We performed cross-tabulations to determine the distribution of the outcome variables across the exposure variables and the covariates. Pearson's chi-square test of independence was employed to identify the significant variables using the p-value ( p < 0.05). We employed the 'best subset variable selection method' to obtain the variables for the regression analysis. According to Lindsey and Sheather , the best subset variable selection method provides the researcher with the best combination of predictors for each level of model complexity. To perform this, we used the Stata command 'gvselect' together with all the covariates to determine which set of covariates to include in the regression model. The output of the best subset selection included the log-likelihood and Akaike's information criterion (AIC), and we selected the set of variables with the lowest AIC for this study. To determine the influence of the different types of mass media on ANC, SBA, and PNC, a multilevel logistic regression was adopted and modelled in three steps: Model 0, Model I, and Model II were fitted to include the outcome variable only, the key explanatory variables only, and the key explanatory variables plus the covariates from the best subset selection, respectively. The AIC was also used to assess model fitness and to compare the models. Adjusted odds ratios (aOR) with their 95% confidence intervals (CIs) are used to present the results of the regression analysis in tabular form. Furthermore, the intraclass correlation coefficient and the variance components are reported. The women's sample weight (v005/1,000,000) was applied in all analyses to alleviate biased estimates, in line with the DHS guidelines. We also used the survey set 'svy' command in Stata to adjust for the complex sampling technique employed by the DHS in all the analyses. Statistical significance was set at a p-value of less than 0.05. Stata software version 16.0 was used to perform the analysis.

The study required no ethical clearance because the DHS dataset is freely available in the public domain. Prior permission to use the dataset was sought from MEASURE DHS. We also adhered to ethical guidelines on the use of secondary datasets for publication. Detailed information about DHS data usage and ethical standards is available at http://goo.gl/ny8T6X .
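Before turning to the results, the snippet below sketches the variable-screening and best-subset steps described above. It is illustrative only and is not the authors' code: it uses an ordinary, unweighted, single-level logistic regression, whereas the estimates reported in this paper come from survey-weighted multilevel logistic models fitted in Stata (with the women's weight v005/1,000,000 and the cluster as the level-two unit), for which the intraclass correlation of a random-intercept logistic model is conventionally computed as the cluster variance divided by the cluster variance plus π²/3. All variable names are hypothetical.

```python
# Rough sketch of the chi-square screening and best-subset (lowest-AIC)
# selection described in the Methods; a simplified analogue of Stata's
# 'gvselect'. Continues from the earlier recoding sketch (df).
from itertools import combinations
from scipy.stats import chi2_contingency
import statsmodels.api as sm
import pandas as pd

outcome = "anc4"
candidates = ["radio_freq", "tv_freq", "newspaper_freq",
              "education", "wealth_index", "residence"]  # hypothetical subset

# 1) Pearson chi-square test of independence for each candidate variable
for var in candidates:
    table = pd.crosstab(df[var], df[outcome])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{var}: chi2={chi2:.1f}, p={p:.4f}")

# 2) Fit every combination of covariates and keep the set with the lowest AIC
best_subset, best_aic = None, float("inf")
for k in range(1, len(candidates) + 1):
    for subset in combinations(candidates, k):
        X = pd.get_dummies(df[list(subset)].astype("category"),
                           drop_first=True).astype(float)
        X = sm.add_constant(X)
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        if fit.aic < best_aic:
            best_subset, best_aic = subset, fit.aic
print("lowest-AIC subset:", best_subset, round(best_aic, 1))
```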
Results

Prevalence of maternal health care service utilization among women in sub-Saharan Africa

Figs – outline the prevalence of maternal health care utilization among women in SSA. The study found that the prevalence of ANC, SBA and PNC utilization in SSA was 61.33% (95% CI = 54.54–68.13), 73.35% (95% CI = 67.39–79.30) and 45.21% (95% CI = 35.53–54.88), respectively. The lowest and highest prevalence of ANC utilization was recorded in Ethiopia (31.99%, [95% CI = 30.95–33.03]) and Sierra Leone (90.72%, [95% CI = 90.06–91.32]), respectively . Also, while Ethiopia recorded the lowest prevalence of SBA utilization (31.08%, [95% CI = 30.04–32.12]), Congo had the highest (93.43%, [95% CI = 92.80–94.06]) . For PNC utilization, the prevalence ranged from 8.33% ([95% CI = 7.71–8.95]) in Ethiopia to 84.22% ([95% CI = 83.21–85.23]) in Zimbabwe .

Association between explanatory variables and maternal health care service utilization

provides a detailed outline of the association between the explanatory variables and the outcome variables. Exposure to mass media, maternal age (years), maternal educational level, marital status, religion, maternal current working status, parity, getting medical help for self, health insurance coverage, wealth index, and residence were significantly associated with ANC, all at p < 0.001. Also, at p < 0.001, exposure to mass media, maternal age (years), maternal educational level, marital status, religion, maternal current working status, parity, getting medical help for self, health insurance coverage, wealth index, and residence were significantly associated with SBA. Further, exposure to mass media, maternal age (years), maternal educational level, marital status, religion, parity, getting medical help for self, health insurance coverage, wealth index and residence were significantly associated with PNC, all at p < 0.001.

Fixed and random effect results of the association between mass media exposure and maternal health care service utilization (ANC, SBA & PNC)

shows the results of the multilevel mixed effect model analysis of the association between mass media exposure and ANC. Women who listened to radio at least once every week (aOR = 1.11, 95% CI = 1.07,1.15) were more likely to attend ANC compared to those who did not listen to radio at all. Also, women who watched television at least once a week (aOR = 1.39, 95% CI = 1.33,1.46) were more likely to attend ANC compared to those who did not watch television at all. presents the results of the multilevel mixed effect model analysis of the association between mass media exposure and SBA utilization. Women who read newspapers/magazines at least once a week (aOR = 1.27, 95% CI = 1.14,1.41); listened to radio at least once a week (aOR = 1.12, 95% CI = 1.07,1.17); and watched television at least once a week (aOR = 1.32, 95% CI = 1.24,1.40) were more likely to utilize SBA than those who did not read newspapers/magazines, listen to radio, or watch television at all. outlines the results of the multilevel mixed effect model analysis of the association between mass media exposure and PNC visits. The study found that women who read newspapers/magazines at least once a week (aOR = 1.35, 95% CI = 1.27,1.45); listened to radio at least once a week (aOR = 1.37, 95% CI = 1.32,1.42); and watched television at least once a week (aOR = 1.39, 95% CI = 1.32,1.47) were more likely to utilize PNC compared to those who did not.
Discussion

The study examined the association between the frequency of mass media exposure and maternal health service utilization among women in SSA. The study found that the prevalence of ANC, SBA, and PNC utilization was 58.5%, 71.6%, and 40.7%, respectively. Variations in the prevalence of ANC, SBA and PNC utilization between countries were observed. The prevalence of ANC utilization was lowest in Ethiopia (32.0%) and highest in Sierra Leone (90.7%). Sierra Leone has implemented a Free Health Care Initiative (FHCI) strategy, and its effectiveness may have led to a significant increase in ANC service utilization among women . In Ethiopia, however, health extension workers have been trained to provide maternal health care, including antenatal care, to women, but the progress of this initiative seems to be thwarted, probably because these extension workers are not listed under skilled providers . Also, while Ethiopia recorded the lowest (31.1%) prevalence of SBA utilization, Congo had the highest (93.4%).
This finding in the case of Ethiopia could be a manifestation of the belief that SBA utilization is less salient and is given less consideration by Ethiopian women . PNC utilization prevalence ranged from 8.3% in Ethiopia to 84.2% in Zimbabwe. This finding may be a result of some Ethiopian women practicing seclusion after delivery, making them less likely to utilize PNC services . For all the maternal health service utilization indicators, Ethiopia had the lowest prevalence; health policymakers in Ethiopia could therefore draw insights from countries that are doing well in this regard, such as Congo, Zimbabwe, and Sierra Leone. The suggestion also applies to other SSA countries with relatively low ANC utilization, such as Burkina Faso (33.6%), Chad (34.1%), Guinea (37.8%), Cote d’Ivoire (45.0%), Mali (45.6%), DR Congo (48.7%), and Burundi (49.3%).

Similar to the observations of previous studies , this study found that women who listened to the radio at least once every week were more likely to have ANC visits than those who did not listen to the radio at all. A plausible account for this finding is the substantial improvement in women's awareness of the need to consider ANC uptake even if they intend to have a home-based delivery . Therefore, more maternal health service utilization programs targeted at radio listeners should be designed and implemented to help increase ANC uptake among women. Also, women who watched television at least once a week were more likely to have ANC visits compared to those who did not. Other studies had similar findings. Women who watch television may have frequently been educated about the need to visit the health facility for ANC, both for their own health and that of the unborn child, making them more likely to access ANC . It could also be that the consequences of not attending ANC, as experienced by other women and shown on television, may reduce women's inclination to neglect ANC visits . This finding underscores the need to increase the broadcasting of television-based maternal health care utilization programs at regular times. For instance, more “tele-nurses” could be used to educate women on maternal health service utilization on television stations.

Women who read newspapers/magazines, listened to radio, and watched television at least once a week were more likely to utilize SBA than those who were not exposed to such media sources at all. This finding corroborates previous studies . In recent times, newspapers, radio, and television have been media outlets through which important health information is transmitted to women. In this light, women who use such media have easy access to information that can help them make informed decisions about their health, increasing their propensity to utilize maternal health services, including SBA . There is also the likelihood that women who are exposed to mass media (radio, newspapers/magazines, and television) will have a positive attitude towards the use of maternal health services such as SBA as a result of what they have heard, watched, or read . Similar to the findings of some previous investigations , the study found that women who read newspapers/magazines, listened to radio, and watched television at least once a week were more likely to utilize PNC compared to those who were not exposed to such media sources at all.
Reasonably, women who are exposed to mass media have better knowledge of PNC services, which increases their likelihood of PNC uptake . Women who are exposed to mass media (especially newspapers/magazines, radio and television) may also have stronger behavioral intentions or desire to utilize PNC compared to their counterparts who are not .

Strengths and limitations

Nationally representative data from SSA countries were employed to assess mass media exposure and maternal healthcare service utilization in SSA. The study has offered insights into the importance of mass media for maternal healthcare service utilization. The wide coverage and the rigor of the analytical procedure enhance the prospects of generalizing the findings to other contexts where maternal healthcare service utilization can be attained. However, due to the cross-sectional nature of the study design, causal inferences cannot be drawn from the current outcomes. The relationships established between the explanatory and outcome variables may vary over time. Recall bias, which is intrinsic to cross-sectional data, may lead to under-reporting of the events studied.

Conclusion

The study identified a strong positive predictive relationship between mass media exposure and health services utilization. The study observed that exposure to radio and television was positively associated with ANC visits. Moreover, exposure to mass media (newspapers/magazines, radio and television) was positively associated with SBA and PNC utilization. We therefore recommend that health policymakers and non-governmental organizations continuously invest resources in the design and implementation of maternal health service utilization educational programmes via all mass media sources to scale up women's maternal health service utilization in SSA.
SEOM-GG clinical guidelines for the management of germ-cell testicular cancer (2023) | 7fc3b4b6-5588-4702-8913-35b370e82e43 | 11467073 | Internal Medicine[mh] | Testicular cancer is the most frequent tumor in adolescent and young adult males aged between 15 and 39 years . The incidence of testicular germ cell tumors (TGCT) is increasing with more than 74,000 new cases globally per year. The higher incidence is observed in Europe, accounting for almost one third of the cases worldwide. The estimated incidence rate per 100,000 habitants in Europe ranges from 5.0 in Spain to 11.8 in Norway. The 5-year survival of testicular cancer is 96%, including 99.2% for localized tumors, 96% for regional lymph node disease and 73.4% for patients with distant metastases , Consequently, all newly diagnosed patients should be treated with curative intent, and therapeutic strategies should minimize acute and long-term side effects. This guideline is based on a systematic review of relevant published studies and with the consensus of ten treatment expert oncologists from the Spanish Society of Medical Oncology (SEOM) and the Spanish Germ Cell Cancer Group (GG). The Infectious Diseases Society of America-US Public Health Service Grading System for Ranking Recommendations in Clinical Guidelines has been used to assign levels of evidence and grades of recommendation . Table summarizes the main recommendations from SEOM-GG. TGCT should be suspected in any male with a solid, painless testicular nodule. A history of cryptorchidism or atrophic testes may be present. Approximately 10% of patients have symptoms of metastatic disease such as lumbar back pain, lower extremity swelling, dyspnea, cough, neck mass enlargement, gynecomastia or paraneoplastic hyperthyroidism. If suspected, diagnosis should be started immediately as any delay in diagnosis may adversely affect tumor stage and prognosis . After a complete physical examination, bilateral high-frequency ultrasound of the testis is required to confirm the presence of a testicular mass and to examine the contralateral testis. The presence of microlithiasis as a single finding is not diagnostic. Other mandatory investigations include a complete blood count and chemistry profile, including pre- and post-orchiectomy STM such as alpha-fetoprotein (AFP), beta-subunit of human chorionic gonadotropin (bHCG) and lactate dehydrogenase (LDH), and a thoraco-abdominopelvic CT scan. Regional metastases first appear in the retroperitoneal lymph nodes, although false-negative results with occult micrometastases may be present in up to 25% of clinical stage I disease. Brain imaging is recommended in patients with extensive pulmonary metastases (i.e., >5 pulmonary nodules), poor IGCCCG risk, very high bHCG levels (i.e., >5000 mIU/ml) or when clinically indicated . Bone scan and/or spinal MRI should be performed if clinical symptoms are present. There is no evidence to support the use of fluorodeoxyglucose PET (FDG-PET) in the staging of testicular cancer . Radical inguinal orchiectomy with ligation of the spermatic cord at the internal inguinal ring is mandatory to facilitate histopathological and prognostic evaluation of the primary tumor and to provide adequate oncologic control . However, in patients with elevated tumor markers and high burden or life-threatening metastatic disease requiring urgent treatment, chemotherapy may be started immediately and orchiectomy delayed until clinical stabilization. 
Partial or trans-scrotal biopsy or orchiectomy is not recommended, as it alters the lymphatic drainage to the inguinal nodes (“scrotal violation”) . The role of routine contralateral testicular biopsy to exclude germ cell neoplasia in situ (GCNIS, present in up to 9%) may be discussed in patients at high risk of contralateral GCNIS (i.e., history of cryptorchidism and/or testicular volume <12 ml) . Partial orchiectomy for fertility preservation in patients with a contralateral tumor (<5%) remains controversial .

A pathologic evaluation of the entire testis should be performed instead of a simple biopsy to determine the histopathological subtype according to the latest WHO 2022 histologic classification (Table ) and the local extent of the disease. Sex cord-stromal tumors of the testis are excluded from this guideline. In practice, germ-cell testicular tumors are classified as seminoma and non-seminomatous germ cell tumors (NSGCT), which include mixed germ cell tumors (GCT). In addition, the presence of in situ neoplasia, vascular, lymphatic or rete testis invasion, and extension beyond the tunica albuginea or into the spermatic cord provide important information for further management and prognosis.

Tumor markers (AFP, bHCG, and LDH) should be measured before surgery, as they support the diagnosis of TGCT and may be indicative of subtype. However, they have low sensitivity, and normal values do not exclude TGCT. AFP and/or bHCG are elevated in about 85% of NSGCTs, even in localized tumors. By contrast, serum bHCG is elevated in less than 20% of testicular seminomas, and AFP is not elevated in pure seminomas; an increase in AFP indicates a non-seminoma component. Serum tumor markers should be closely monitored after orchiectomy. A progressive decline to normalization according to their half-lives (5–7 days for AFP and 1–3 days for bHCG) confirms that orchiectomy has removed all tumor disease; otherwise, they provide early evidence of residual disease or recurrence.

TGCT are staged using the eighth (2016) tumor, node, metastasis (TNM) staging system developed jointly by the American Joint Committee on Cancer and the Union for International Cancer Control, based on imaging and STM after orchiectomy (Table ) . Localised tumor includes T1-4N0M0S0; all others should be considered as disseminated disease, including those patients without radiologic evidence of metastasis whose tumor markers do not normalize after orchiectomy. Advanced stages (IS-III) are further classified according to the IGCCCG prognostic model (Table ).

Infertility or impaired spermatogenesis is common in patients with testicular cancer before the start of treatment , but can be exacerbated by orchiectomy, cisplatin-based chemotherapy or radiotherapy. Approximately 70% of patients will recover spermatogenesis, depending on age, type of treatment and severity of previous oligospermia. Information and counselling on fertility issues and sperm cryopreservation should be offered routinely prior to the initiation of any form of treatment, ideally prior to orchiectomy.

Stage I seminoma

Approximately 80% of patients with seminoma present with stage I disease, which is associated with a long-term survival rate of 99%. Recurrences on surveillance are uncommon (15–20%), occur in the first 14–18 months, mainly in the retroperitoneum, and are highly curable with cisplatin-based chemotherapy .
Tumor size (TS), considered as a continuous variable, stromal rete testis invasion (RTI) and lymphovascular invasion (LVI) are the main predictive factors for relapse on surveillance. Relapse-free survival in patients with TS ≤ 5 cm without RTI or LVI, or TS ≤ 2 cm with either RTI or LVI, is 89–94%, in contrast to 34–73% in those with TS > 5 cm and both RTI and LVI, and 76–84% in the remaining patients .

Therapeutic options after orchiectomy should be discussed with the patient. Active surveillance is the preferred strategy for most patients, but adjuvant chemotherapy with a single course of carboplatin (area under the curve of 7) is an alternative, especially for those with more than one risk factor or for those unwilling or unable to undergo surveillance . Some non-randomised studies suggest that two cycles of carboplatin may be associated with a lower risk of relapse, but there are limited data on the long-term toxicities of carboplatin . Due to the increased risk of second malignancies, low-dose, reduced paraaortic adjuvant radiotherapy should only be recommended if chemotherapy is contraindicated .

Stage I NSGCT

Approximately two thirds of patients with NSGCT are diagnosed with stage I disease. Orchiectomy alone cures about 75% of these patients. The remainder will relapse, usually within the first 2 years after surgery, the majority as good-risk advanced disease. The presence of LVI in the primary tumor defines a subgroup with a high risk of relapse, approaching 50% (as opposed to 15% in the remaining patients). Predominance of embryonal carcinoma is also associated with an increased recurrence rate. The expected relapse rates are 25%, 41% and 77%, respectively, when none, one or both of these factors are present . The 5-year disease-specific survival of patients with stage I NSGCT is close to 100%, regardless of the postoperative strategy. Active surveillance of all patients provides an excellent cure rate, avoiding unnecessary therapy and potential long-term toxicity in many patients. Alternatively, a risk-adapted approach, i.e., the administration of adjuvant chemotherapy to high-risk patients, allows for less intensive follow-up, reducing the associated stress and disruption of life and reducing the need for post-chemotherapy retroperitoneal lymphadenectomy in the event of recurrence. Based on a prospective non-randomized study, one cycle of standard BEP chemotherapy (Table ) reduces the risk of relapse to less than 5% in patients with LVI and is the most commonly recommended adjuvant treatment for these patients . Retroperitoneal lymphadenectomy is reserved for selected patients with LVI, contraindications to adjuvant BEP and doubtful ipsilateral lymph nodes on CT scan .
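As a purely illustrative aid (not a clinical decision tool), the figures quoted in the two stage I sections above can be collected into a small helper; the numbers below are taken directly from the text, and the grouping logic is only an approximation of the risk descriptions given there.

```python
# Approximate relapse figures on surveillance for clinical stage I disease,
# using only the percentages quoted in the text above. Illustration only.
def stage1_nsgct_relapse_risk(lvi: bool, embryonal_predominant: bool) -> str:
    """Expected relapse rate with 0, 1 or 2 risk factors (25%, 41%, 77%)."""
    return {0: "~25%", 1: "~41%", 2: "~77%"}[int(lvi) + int(embryonal_predominant)]

def stage1_seminoma_rfs(tumour_size_cm: float, rete_testis_invasion: bool,
                        lvi: bool) -> str:
    """Approximate relapse-free survival band on surveillance."""
    low = (tumour_size_cm <= 5 and not rete_testis_invasion and not lvi) or \
          (tumour_size_cm <= 2 and (rete_testis_invasion or lvi))
    high = tumour_size_cm > 5 and rete_testis_invasion and lvi
    if low:
        return "89-94% relapse-free survival"
    if high:
        return "34-73% relapse-free survival"
    return "76-84% relapse-free survival"

print(stage1_nsgct_relapse_risk(lvi=True, embryonal_predominant=False))  # ~41%
print(stage1_seminoma_rfs(4.0, rete_testis_invasion=True, lvi=False))    # 76-84%
```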
General recommendations

A validated prognostic model for advanced disease was developed by the International Germ Cell Cancer Collaborative Group (IGCCCG). Patients with advanced disease (stages IS, II and III) were classified into good, intermediate, and poor risk groups for both progression-free and overall survival, based upon histology (seminoma vs. nonseminoma), primary site of the tumor, metastatic sites, and STM levels (Table ) . This classification remains valid and is the basis for selecting appropriate treatment, although it has recently been updated to include modern treatments and longer follow-up . For patients with disseminated seminoma, the expected 5-year OS is 95% and 88% for good and intermediate prognosis, respectively, although the prognosis of those with good prognosis and LDH > 2.5 × ULN is very similar to that of those with intermediate risk. For patients with advanced NSGCT, the 5-year OS is 96%, 89% and 67% for good, intermediate and poor prognosis, respectively. However, in the latest update, a more refined prognostic model was developed and validated, including LDH > 2.5 × ULN, age and the presence of pulmonary metastases as additional adverse prognostic factors. Advanced disease includes stages IS to III.
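The IGCCCG cut-offs themselves are given in the cited table rather than in the text; purely for illustration, the sketch below hard-codes the widely published 1997 IGCCCG criteria (AFP in ng/mL, hCG in IU/L, LDH as multiples of the upper limit of normal). It is not a reproduction of this guideline's table and should not be used for clinical decisions.

```python
# Hedged illustration of how IGCCCG risk assignment works, using the published
# 1997 cut-offs (not reproduced from this guideline's own table).
def igcccg_risk(seminoma: bool, mediastinal_primary: bool,
                non_pulmonary_visceral_mets: bool,
                afp_ng_ml: float, hcg_iu_l: float, ldh_x_uln: float) -> str:
    if seminoma:
        # Seminoma has no poor-prognosis category
        return "intermediate" if non_pulmonary_visceral_mets else "good"
    if (mediastinal_primary or non_pulmonary_visceral_mets
            or afp_ng_ml > 10_000 or hcg_iu_l > 50_000 or ldh_x_uln > 10):
        return "poor"
    if afp_ng_ml >= 1_000 or hcg_iu_l >= 5_000 or ldh_x_uln >= 1.5:
        return "intermediate"
    return "good"

print(igcccg_risk(seminoma=False, mediastinal_primary=False,
                  non_pulmonary_visceral_mets=False,
                  afp_ng_ml=250, hcg_iu_l=1200, ldh_x_uln=1.2))  # -> good
```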
Cisplatin-based chemotherapy is the cornerstone of systemic treatment for germ cell cancer (Table ). Bleomycin, etoposide and cisplatin (BEP) is the standard of care. Patients with intermediate- and poor-risk IGCCCG disease should be treated with four cycles of BEP, whereas patients with good-risk IGCCCG disease can be safely treated with three cycles of BEP .

An absolute or relative contraindication to bleomycin may exist in patients over 40 years of age, those with pulmonary disease, heavy smokers, athletes or professionals who require a high lung capacity, and those with mediastinal tumors or lung metastases, especially if extensive pulmonary resection or radiation is planned after chemotherapy . Baseline and follow-up spirometry and diffusing capacity for carbon monoxide (DLCO) may identify these patients as ineligible for bleomycin, as well as early toxicity during treatment. Bleomycin toxicity should be suspected in any patient with sudden onset of cough or dyspnea. A decrease in corrected DLCO is a predictor of bleomycin-induced pneumonitis, and we recommend discontinuing bleomycin if a decrease in DLCO greater than 25% is observed . If there is a contraindication to bleomycin in patients with IGCCCG good-prognosis tumors, four cycles of EP may be used as an alternative, although slightly worse, statistically non-significant results have been reported in two randomised trials in NSGCT . For patients with advanced IGCCCG intermediate- or poor-prognosis tumors, the alternative first-line schedule is VIP (Table ) plus prophylactic G-CSF . Combinations of carboplatin with etoposide (EC) in patients with good prognosis, or with bleomycin and etoposide in NSGCT (BEC), are inferior to the same combinations with cisplatin . Radiotherapy (30 Gy) to the retroperitoneal ipsilateral and iliac lymph nodes could also be an alternative for selected stage IIA and IIB patients with seminoma who refuse or have a contraindication to chemotherapy .

BEP is generally well tolerated, especially in patients with a good prognosis, although many patients may experience myelosuppression (especially neutropenia), fatigue, alopecia and, in some cases, nausea, peripheral neuropathy, tinnitus or hearing loss, and even renal and pulmonary toxicity. The oncologist should aim to administer the full dose at the scheduled time, avoiding delays and dose reductions as much as possible, as lower dose intensity is associated with worse outcomes . We recommend prophylactic G-CSF to achieve these goals. Because of the low haematological toxicity of bleomycin, it can generally be given on days 8 and 15 of each cycle, even if the blood-cell count is low, although the dose should be adjusted in patients with a creatinine clearance <50 ml/min and discontinued in the event of pulmonary toxicity. In any case, the total cumulative dose should not exceed 360–400 IU. A dose reduction of etoposide or ifosfamide should be considered in the event of prolonged febrile neutropenia, incomplete blood-cell recovery, bleeding or G4 hematologic toxicity in the previous cycle .

Tumor marker decrease should be monitored before each cycle. The Spanish Germ Cell Cancer Group Registry has a serum tumor marker calculator available to all members ( www.grupogerminal.es ). Tumor marker decline is the only confirmed prospective predictor of response to chemotherapy in patients with metastatic germ cell cancer. Patients with an inadequate decline after the first or second cycle represent a group with a poorer prognosis.
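The expected marker trajectory can be approximated from the half-lives quoted earlier in this guideline (5–7 days for AFP, 1–3 days for bHCG). The sketch below is a simplified illustration of that arithmetic only; it is not the Spanish Germ Cell Cancer Group calculator mentioned above, and real decline assessment must also account for ongoing tumor production and assay variability.

```python
# Simplified half-life arithmetic for judging marker decline (illustration
# only; not the GG serum tumor marker calculator referenced above).
def expected_marker_value(start_value: float, days_elapsed: float,
                          half_life_days: float) -> float:
    """Value expected if decline were purely due to clearance."""
    return start_value * 0.5 ** (days_elapsed / half_life_days)

# Example: AFP 8000 ng/mL at the start of cycle 1, re-checked 21 days later,
# assuming a 7-day half-life -> about 1000 ng/mL if the decline is adequate.
print(round(expected_marker_value(8000, 21, 7), 1))  # 1000.0
```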
The GETUG-13 trial showed that patients with a favorable tumor marker response after one cycle of BEP are likely to be cured in more than 80% of cases if BEP is continued . Patients with an inadequate tumor marker response represent an unmet medical need in which close monitoring and early salvage strategies should be considered . Importantly, these guidelines recommend that patients with TGCT at high risk of recurrence, as well as those who have relapsed, be treated by multidisciplinary teams in experienced centers .

Special situations

Extragonadal germ cell tumors

Extragonadal GCT are rare neoplasms (1–5% of all GCTs) that originate in midline locations such as the mediastinum or retroperitoneum, probably from primordial germ cells that fail to migrate to the gonadal ridges during embryonal development . Sacrococcygeal and intracranial GCTs, most common in children and adolescents, are not covered in this guide. Histologic diagnosis and an accurate differential diagnosis from other histologies, such as thymic carcinomas and lymphomas, are encouraged. Retroperitoneal GCTs have a similar clinical presentation, prognosis and treatment to disseminated testicular tumors, although they are usually bulky at diagnosis because they are oligosymptomatic in their early stages. Treatment is based on systemic cisplatin-based chemotherapy following the recommendations above for each of the IGCCCG subgroups, as well as the management of residual disease described below .

Primary mediastinal GCT have different molecular and clinical features compared to TGCT. Although the prognosis depends on the extent of disease, it appears to be similar to that of TGCT for seminomas and worse for NSGCT. The treatment of mediastinal tumors generally requires a multimodality approach. Chemotherapy is usually given first, followed by surgery to remove any residual masses, although the optimal order of these therapies has not been established. Chemotherapy with BEP, EP or VIP should be chosen according to the above general recommendations for each IGCCCG prognostic subgroup, balancing the ability to control disease while minimizing the risk of bleomycin toxicity, taking into account the possibility of future mediastinal surgery and the potential need for partial lung resection. Most patients with mediastinal NSGCT have residual masses at the end of chemotherapy. Removal of all residual masses after chemotherapy plays an important role in the treatment of these tumors and should be performed whenever technically possible . For mediastinal seminomas, radiotherapy may be an alternative in patients with contraindications to surgery.

Management of post-chemotherapy residual disease

Decisions on residual masses after completion of chemotherapy should be made based on the initial histology, the location of the residual lesions, and the evolution of tumor markers. Patients with NSGCT, post-chemotherapy negative tumor markers and residual retroperitoneal lymph nodes ≥1 cm in largest axial diameter should undergo surgery, preferentially an open nerve-sparing retroperitoneal lymph node dissection (RPLND). In large residual masses, a full bilateral RPLND is recommended, whereas a modified template RPLND can be considered in cases of low volume before and after chemotherapy . FDG-PET-CT is not recommended for the evaluation of residual disease in NSGCT. Pathologic examination of RPLND specimens following chemotherapy demonstrates necrosis in 50% of cases, mature teratoma in 35%, and viable tumor in 15%.
Persistent intrathoracic masses, as well as those in other locations, should be resected if technically feasible . Although the timing of metastasectomy is not well established, the retroperitoneum is commonly selected as the initial site for resection due to its higher frequency of residual disease. However, the pathologic discrepancy between retroperitoneal lymph node and thoracic residual masses is about 30%. Pathologic concordance between the two lungs is greater than 90% . Thus, patients with necrosis in both the retroperitoneum and one side of the lung can avoid contralateral lung surgery .

In contrast, active surveillance is recommended for patients with disseminated seminoma and post-chemotherapy residual disease with a largest diameter of less than 3 cm. In the remaining patients, an FDG PET/CT should be performed at least 6 weeks after the last dose of bleomycin. In case of a negative FDG-PET, we recommend active surveillance due to its high negative predictive value (>90%). In case of indeterminate results, we recommend repeating the PET/CT 8–12 weeks later, due to its limited positive predictive value. If FDG-PET is unequivocally positive, we recommend resection of the residual mass; however, due to the limited positive predictive value of FDG-PET, and the difficulty and morbidity of resection of residual masses in seminoma, which often have an associated desmoplastic reaction, some authors propose a biopsy of the lesion to confirm tumor persistence before making a therapeutic decision. Radiotherapy may be an option if residual disease is confirmed and resection is not feasible.

Postoperative chemotherapy after resection of residual disease

Although postoperative treatment has not been shown to increase overall survival and remains controversial, two additional cycles of chemotherapy (EP, VIP or TIP) are commonly recommended for patients with more than 10% viable tumor in the residual mass, particularly if they had intermediate- or poor-risk disease and/or an incomplete resection .

Choriocarcinoma syndrome and patients at risk of acute respiratory distress syndrome (ARDS)

Choriocarcinoma is a highly vascularized tumor with rapid development of extensive metastases, particularly in the lung, but also in the liver, brain, and other organs. Because bleeding leading to ARDS and other severe complications may be triggered even with the first standard cycle of chemotherapy, an initial dose-reduced induction regimen such as a 2- or 3-day EP or baby-BOP (cisplatin 50 mg/m2, vincristine 2 mg, and bleomycin 30 U on day 1) has been recommended. After 14 days of this regimen, once the patient is stabilized, the full number of cycles should be applied following the induction cycle. If induction EP was used, the remaining days of the EP protocol may be administered on day 15 when clinically feasible, before starting standard BEP. Orchiectomy should be performed in all patients with testicular lesions, but if the patient is not stable at the time of diagnosis, chemotherapy should be started and orchiectomy delayed, even until the end of systemic treatment. These induction approaches are also valid for other patients with NSGCT at high risk of ARDS due to extensive lung metastases, dyspnea or hypoxemia at diagnosis. In cases of extensive tumor volume, preventive measures for tumor lysis syndrome are also necessary.

Patients unfit for cisplatin

Patients who are definitely unfit for cisplatin-based CT can be treated with carboplatin-based chemotherapy, although results are inferior to BEP .
In patients with obstructive uropathy, a nephrostomy should be performed before initiating chemotherapy to allow cisplatin to be administered.

Brain metastases

Brain metastases occur in about 10% of patients with advanced disease, either in the context of initial metastatic disease, as part of a systemic relapse or, rarely, as an isolated site of relapse. Long-term survival of patients presenting with brain metastases at diagnosis is poor (30–50%) and even poorer when the brain is a site of recurrent disease (5-year survival rate of 2–5%) . Brain metastases usually require a multimodal approach, although the optimal sequence should be individualized. The general approach for patients with brain metastases is chemotherapy followed by observation in case of complete response, or surgical excision and/or stereotactic radiosurgery in case of small residual disease. In patients with brain metastases at relapse, consolidation RT should be used, even after a complete response to chemotherapy. Surgery should be considered in the case of a persistent solitary metastasis, but the location of the metastases, the histology of the primary tumor and the systemic disease status should be taken into account. Palliative whole-brain radiation therapy is indicated for multiple unresectable lesions .

Prophylaxis of thromboembolic events (TEE)

TEE occur more frequently in GCT patients receiving chemotherapy than in patients of the same age receiving chemotherapy for other cancers. Retrospective studies identified increasing stage and size of retroperitoneal lymph nodes, as well as the Khorana score and indwelling vascular access devices, as TEE risk factors . Data regarding the efficacy of thromboprophylaxis are conflicting, but despite the lack of level-I evidence, prevention of TEE should be particularly considered in patients at higher risk, such as those with retroperitoneal involvement >3.5 cm, stage III disease or poor-prognosis IGCCCG . In addition, vascular access devices should be avoided whenever possible.

Growing teratoma syndrome

This is a rare condition associated with NSGCT, characterized by an increase in metastatic mass during or after chemotherapy with normalized STM, caused by a mature teratoma with no malignant component. Treatment consists of surgical resection of the lesions .

Teratoma with malignant transformation (TMT)

TMT into somatic histologies is a rare but significant complication that occurs in less than 6% of metastatic GCTs. This transformation results in the emergence of a variety of non-germ cell histologies, such as adenocarcinoma, squamous cell carcinoma, sarcoma, and others, which may coexist with the original germ cell tumor. When present at metastatic sites, TMT is associated with a poor prognosis. Somatic type of malignancy, grade, extent of disease, feasibility of radical surgery, number of prior lines of chemotherapy, and the primary tumor site have also been proposed as determinants of long-term outcomes . As these tumors are often resistant to standard platinum-based chemotherapy and radiotherapy, their management remains a challenge for clinicians. The most effective therapeutic approach currently available is complete resection, which often requires aggressive and extensive surgery, especially when the disease is confined to solitary sites. Adjuvant chemotherapy, as well as systemic treatment when complete resection is not possible, should be individualized and tailored to the transformed histology, particularly in sarcomas and primitive neuroectodermal malignant transformation .
Because bleeding leading to ARDS and other severe complications may be triggered even by the first standard cycle of chemotherapy, an initial dose-reduced induction regimen such as a 2- or 3-day EP or baby-BOP (cisplatin 50 mg/m², vincristine 2 mg, and bleomycin 30 U on day 1) has been recommended. After 14 days of this regimen, once the patient has stabilized, the full number of cycles should be administered following the induction cycle. If induction EP was used, the remaining days of the EP protocol may be administered at day 15 when clinically feasible, before starting standard BEP. Orchiectomy should be performed in all patients with testicular lesions, but if the patient is not stable at the time of diagnosis, chemotherapy should be started and orchiectomy delayed, even until the end of systemic treatment. These induction approaches are also valid for other patients with NSGCT who are at high risk of ARDS due to extensive lung metastases, dyspnea or hypoxemia at diagnosis. In cases of extensive tumor volume, preventive measures for tumor lysis syndrome are also necessary.
Patients unfit for cisplatin
Patients who are definitely unfit for cisplatin-based CT can be treated with carboplatin-based chemotherapy, although results are inferior to BEP . In patients with obstructive uropathy, a nephrostomy should be performed before initiating CT to allow cisplatin to be administered.
Brain metastases
Brain metastases occur in about 10% of patients with advanced disease, either in the context of initial metastatic disease, as part of a systemic relapse or, rarely, as an isolated site of relapse. Long-term survival of patients presenting with brain metastases at diagnosis is poor (30–50%) and even poorer when the brain is a site of recurrent disease (5-year survival rate of 2–5%) . Brain metastases usually require a multimodal approach, although the optimal sequence should be individualized. The general approach for patients with brain metastases is chemotherapy followed by observation in case of complete response, or surgical excision and/or stereotactic radiosurgery in case of small residual disease. In patients with brain metastases at relapse, consolidation RT should be used, even after a complete response to chemotherapy. Surgery should be considered for a persistent solitary metastasis, taking into account the location of the metastasis, the histology of the primary tumor and the systemic disease status. Palliative whole-brain radiation therapy is indicated for multiple unresectable lesions .
Prophylaxis of thromboembolic events (TEE)
TEE occur more frequently in GCT patients receiving chemotherapy than in patients of the same age receiving chemotherapy for other cancers. Retrospective studies identified increasing stage and size of retroperitoneal lymph nodes, as well as the Khorana score and indwelling vascular access devices, as TEE risk factors . Data regarding the efficacy of thromboprophylaxis are conflicting, but despite the lack of level-I evidence, prevention of TEE should be particularly considered in patients at higher risk, such as those with retroperitoneal involvement >3.5 cm, stage III disease or poor-prognosis IGCCCG classification . In addition, vascular access devices should be avoided whenever possible.
Growing teratoma syndrome
This is a rare condition associated with NSGCT, characterized by an increase in metastatic mass during or after chemotherapy with normalized STM, caused by a mature teratoma with no malignant component. Treatment consists of surgical resection of the lesions .
Teratoma with malignant transformation (TMT)
TMT into somatic histologies is a rare but significant complication that occurs in less than 6% of metastatic GCTs. This transformation results in the emergence of a variety of non-germ cell histologies, such as adenocarcinoma, squamous cell carcinoma, sarcoma, and others, which may coexist with the original germ cell tumor. When present at metastatic sites, TMT is associated with a poor prognosis. The somatic type of malignancy, grade, extent of disease, feasibility of radical surgery, number of prior lines of chemotherapy, and the primary tumor site have also been proposed as determinants of long-term outcomes . As these tumors are often resistant to standard platinum-based chemotherapy and radiotherapy, their management remains a challenge for clinicians. The most effective therapeutic approach currently available is complete resection, which often requires aggressive and extensive surgery, especially when the disease is confined to solitary sites. Adjuvant chemotherapy, as well as systemic treatment when complete resection is not possible, should be individualized and tailored to the transformed histology, particularly in sarcomas and primitive neuroectodermal malignant transformation .
Patients with stage I disease at diagnosis managed with surveillance, RPLND, radiotherapy or carboplatin, and even those who have received one cycle of BEP, should be treated at relapse according to the standard recommendations for first-line advanced disease, taking into account the previous cumulative dose of bleomycin. RPLND may be an option in patients with NSGCT if teratoma is suspected (depending on the presence of tumor markers and the extent of disease at relapse) . Approximately 18–20% of patients with advanced TGCT are refractory to or relapse after first-line chemotherapy (5-year PFS rates range from 54 to 90% depending on the IGCCCG subgroup) and require additional salvage therapies. These patients are still potentially curable, albeit in a much smaller proportion than first-line patients, and should preferably be treated by experienced teams in reference centers. Patients who fail first-line cisplatin-based chemotherapy should be classified according to the International Prognostic Factor Study Group (IPFSG) classification, a prognostic risk model based on a large retrospective series (Table ). The IPFSG established five prognostic categories based on primary site, previous response to therapy, progression-free interval, tumor marker levels, histology, and the presence of metastases in the liver, bone or brain. Two-year progression-free survival rates ranged from 75% in the very low-risk group to 6% in the very high-risk subgroup . There are two main options for salvage treatment of these patients: conventional-dose chemotherapy (CDCT) and high-dose chemotherapy (HDCT). Approximately one third of patients treated with one of these regimens become long-term survivors. Although the best regimen and strategy for each IPFSG patient subgroup is not yet well known, retrospective analyses suggest that HDCT may be superior to CDCT in the majority of patients; however, the toxicity associated with HDCT can be significant and its benefit has not been clearly demonstrated. The results of the large randomized phase III TIGER trial comparing the two strategies are eagerly awaited. In the meantime, both options are considered valid. It is important to note that surgery of all residual lesions after chemotherapy must be performed in all cases if technically feasible, regardless of the type of treatment administered . In the CDCT approach, the most commonly used salvage regimen after BEP is four cycles of TIP with G-CSF support . The VeIP and VIP regimens, with vinblastine or etoposide, respectively, instead of paclitaxel, could be an alternative in some patients (Table ). Two main HDCT strategies are available for these patients. One consists of two cycles of HDCT with carboplatin and etoposide with autologous peripheral-blood hematopoietic stem-cell support, preceded by one or two cycles of standard-dose chemotherapy with VeIP or VIP, which are used for leukapheresis of peripheral-blood stem cells .
The other approach is the TICE regimen, which includes two cycles of paclitaxel plus ifosfamide with leukapheresis, followed by three cycles of high-dose carboplatin plus etoposide with reinfusion of peripheral-blood stem cells . Some patients who progress after CDCT can be rescued with HDCT as a second or subsequent salvage therapy. In the remainder of patients, including those who progress after HDCT, subsequent lines are usually palliative and only occasionally lead to long-term survival. Clinical trials, including early-phase trials, should be prioritized in this scenario. Treatments commonly used in patients who progress after HDCT, when a clinical trial is not an option, include paclitaxel–gemcitabine, oxaliplatin–gemcitabine (GEMOX) or oral etoposide . Late relapse after first-line chemotherapy, defined as tumor recurrence more than 2 years after primary systemic treatment, represents a special situation characterized by a higher degree of resistance to chemotherapy. In these cases, early complete surgical resection is the mainstay of treatment whenever possible. However, salvage chemotherapy is usually also required in conjunction with surgery . Given the good treatment outcomes of TGCT, a large population of young long-term survivors is to be expected. These patients require an appropriate follow-up program that detects relapses early, without an excessive burden of visits (to facilitate adherence) and with as little imaging-related radiation exposure as possible. In recent years, there has been increasing interest in adopting less intensive imaging strategies, especially in stage I tumors. The following paragraphs and Table summarize the SEOM-Grupo Germinal recommendations, based on the most recent evidence and compiling endorsements from other groups with broad expertise in the management of this disease . It is important to note that no single follow-up plan is appropriate for all patients; the following recommendations are intended to provide guidance and should be adapted to each individual patient.
Clinical Stage I Seminomas (CSIS)
Cure rates for CSIS are close to 100% regardless of the initial approach, which includes either surveillance or adjuvant carboplatin after orchiectomy. Recurrences occur in approximately 6–20% and 3–6% of patients after surveillance and adjuvant carboplatin, respectively. Most of these relapses (75–95%) are observed within the first 2–3 years and >95% within 5 years, with a median time to relapse of 14–21 months. In terms of location, most patients (90%) relapse in the retroperitoneum, and therefore cross-sectional imaging is the main means of detection. Conversely, the frequency of recurrences detected exclusively by other methods is anecdotal, as only 0–5%, 0%, and 5–10% are diagnosed by clinical examination, chest x-ray, or serum tumor markers, respectively . These observations have shaped the follow-up recommendations over the years.
The SEOM-Grupo Germinal proposal for CSIS is to adapt the follow-up schedule according to the treatment option used in this clinical setting, which conditions the risk of recurrence (i.e., active surveillance or adjuvant carboplatin). Although physical examination and serum tumor markers (STM) are also included in the recommendations, the critical component is cross-sectional imaging of the abdomen and pelvis, as more than 90% of relapses occur in the retroperitoneum. In general, for patients who opted for active surveillance, imaging of the abdomen and pelvis is recommended every 6 months for the first 3 years and then annually in years 4 and 5. On the other hand, for patients who received adjuvant carboplatin, imaging of the abdomen and pelvis is recommended less intensively: every 6 months during the first year only, then annually in years 2 and 3, omitting year 4 and performing an imaging test at the end of year 5. Contrary to our previous guideline, imaging of the chest is no longer routinely recommended. After 5 years, follow-up needs to be individualized, as no consensus exists in the literature. Testicular ultrasound should be considered in years 3 and 5 in the presence of a normal contralateral testis, or more frequently in patients with risk factors or previous abnormal ultrasound findings such as microcalcifications.
Clinical Stage I NSGCT
General follow-up recommendations should be individualized according to the presence or absence of factors that increase the risk of recurrence and the treatment received. For patients who opt exclusively for active surveillance with no treatment intervention, we recommend a more intense follow-up. Thus, during the first year, when the risk of recurrence is highest, visits every 2 months with STM and quarterly imaging are recommended. In the second year, the frequency of visits can be extended to every 3 months, with imaging performed only every 6 months. Given the rarity of relapses beyond 2 years in clinical stage I NSGCT patients, no cross-sectional imaging is recommended in years 3 and 4, during which visits with STM are performed every 4 months and once a year, respectively. During year 5, a yearly visit with a final imaging evaluation at month 60 is recommended. For patients who opted for adjuvant BEP, the frequency of visits and STM is less intense: every 3 months during the first 2 years, every 6 months in years 3 and 4, and yearly in year 5. Cross-sectional imaging is likewise recommended less frequently, reducing the total number of tests in this group: an imaging test is recommended every 6 months in year 1, then yearly in years 2 and 3, omitting year 4 and performing an imaging test at the end of year 5. After 5 years, follow-up needs to be individualized both for CSIS and clinical stage I NSGCT, as no consensus exists for either group.
Advanced seminoma
In advanced disease, the overall benefit of chemotherapy in patients with seminoma is high, with around two thirds of patients achieving a favorable response, including 30% complete responses. It is estimated that fewer than 20% of patients relapse after systemic treatment, with a median time to relapse of 9 months; the retroperitoneum and lung are the most common relapse sites (90% and 10%, respectively) . This relapse profile defines the current follow-up recommendation, which differs from that for NSGCT, with less frequent visits but longer imaging follow-up and variable evaluation of the chest, as summarized in Table .
Advanced NSGCT
After achieving a favorable response, it is estimated that around 20% of patients with NSGCT might relapse. Recurrences differ from those of seminomas in their shorter timing (median time to relapse of 3 months, with most relapses within the first 2 years), broader location (retroperitoneum 33%, pelvis 25% and lung 33%), and the value of STM (three quarters of recurrences can be detected by tumor markers) . All these particularities lead to a slightly different follow-up schema, which is illustrated in Table .
Additional recommendations
Testicular ultrasound and self-examination should be included in the follow-up. Approximately 1–5% of patients with a prior history of testicular cancer will develop a contralateral testicular cancer in the next 20 years, with >25% of metachronous TGCT presenting ≥10 years after the first TGCT . Testicular ultrasound should be considered in years 3 and 5 in the presence of a normal contralateral testis, or more frequently in patients with risk factors or previous abnormal ultrasound findings such as microcalcifications. On the other hand, new strategies are being developed to reduce the risk of cumulative radiation exposure. In this sense, replacing CT with MRI, using low-dose non-contrast CT and avoiding chest x-rays may be safe, at least for low-risk patients . As poor adherence to post-treatment follow-up protocols can be associated with higher rates of relapse, delays in definitive therapy and unnecessary morbidity, a number of strategies are being developed to improve adherence, such as reducing the number of hospital visits and tests, or incorporating new technologies such as mobile health (m-health) . Post-5-year follow-up lacks consensus and requires individual patient assessment. For chemotherapy-treated patients, the emphasis transitions from detecting tumor recurrence to managing late treatment effects and promoting overall health. Patients should be motivated to lead a healthy lifestyle to mitigate the risk of severe late effects such as secondary cancers and cardiovascular disease. Finally, it is expected that in the near future the incorporation of new biomarkers predictive of residual disease or relapse (e.g., miR-371a-3p) will allow better prediction of the risk of recurrence and facilitate follow-up, reducing costs and exposure to ionizing radiation .
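Purely as an illustrative convenience, and not as a clinical decision tool (the SEOM-Grupo Germinal tables remain the authoritative reference), the clinical stage I imaging schedules described above can be encoded in a simple data structure. The minimal Python sketch below transcribes the month values stated in the text; nothing beyond those values should be read into it.

```python
# Illustrative sketch only: abdomino-pelvic cross-sectional imaging time points
# (months from start of follow-up) for clinical stage I disease, transcribed
# from the schedules described in the text above. Not a clinical decision tool.
from typing import Optional

IMAGING_MONTHS = {
    ("CSIS", "active surveillance"): [6, 12, 18, 24, 30, 36, 48, 60],  # 6-monthly x 3 years, then yearly
    ("CSIS", "adjuvant carboplatin"): [6, 12, 24, 36, 60],             # year 4 omitted
    ("CSI NSGCT", "active surveillance"): [3, 6, 9, 12, 18, 24, 60],   # quarterly year 1, 6-monthly year 2, final scan month 60
    ("CSI NSGCT", "adjuvant BEP"): [6, 12, 24, 36, 60],                # year 4 omitted
}

def next_imaging(stage: str, management: str, months_since_start: float) -> Optional[int]:
    """Return the next scheduled imaging month, or None if the 5-year schedule is complete."""
    upcoming = [m for m in IMAGING_MONTHS[(stage, management)] if m > months_since_start]
    return upcoming[0] if upcoming else None

if __name__ == "__main__":
    # Example: a CSIS patient on active surveillance, 20 months into follow-up
    print(next_imaging("CSIS", "active surveillance", 20))  # -> 24
```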
Although 95% of patients with TGCT are cured, survivors face potential late adverse effects and reduced quality of life. The frequency and severity of specific adverse events have been combined into a cumulative burden of morbidity (CBM) score for patients who had received cisplatin-based chemotherapy. At a median follow-up of 4.2 years, 20% had a high/severe CBM score, and only 5% had no adverse health outcomes. Therefore, understanding the risk of long-term effects of therapy is important to optimize care in this population . Secondary neoplasms, infertility, cardiovascular toxicity, metabolic syndrome, specific sequelae of chemotherapy (including neurotoxicity, ototoxicity, and pulmonary and renal toxicity), and psychosocial distress, including anxiety and sexual dysfunction, are the major long-term toxicities in this population . The relative risk of a second non-germ cell solid tumor is approximately doubled after radiotherapy or chemotherapy, and such tumors usually occur more than 10 years after treatment. The most common associated solid tumors are of gastrointestinal, urinary tract and soft tissue origin. The estimated cumulative risk of leukemia is 0.5% and 2% after cumulative etoposide doses of <2 and >2 g/m², respectively, and leukemia usually occurs within 10 years of treatment.
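To make the cumulative-dose figures quoted in this guideline concrete (the 360–400 IU bleomycin cap mentioned earlier and the 2 g/m² etoposide level separating the 0.5% and 2% leukemia-risk estimates above), here is a minimal arithmetic sketch in Python. The per-cycle BEP doses are typical published values assumed only for this worked example; this is illustrative arithmetic, not a prescribing or clinical tool.

```python
# Illustrative arithmetic only - not a prescribing or clinical decision tool.
# Thresholds are those quoted in the guideline text; the per-cycle BEP doses
# below are typical published values, assumed here only for the worked example.

BLEOMYCIN_CAP_IU = 360            # guideline: total cumulative dose should not exceed 360-400 IU
ETOPOSIDE_LEUKEMIA_CUTOFF = 2000  # mg/m2; leukemia risk ~0.5% below vs ~2% above this cumulative dose

BEP_PER_CYCLE = {                 # assumed per-cycle doses of standard BEP (illustrative)
    "bleomycin_IU": 30 * 3,       # 30 U weekly x 3 weeks = 90 IU per cycle
    "etoposide_mg_m2": 100 * 5,   # 100 mg/m2 on days 1-5 = 500 mg/m2 per cycle
}

def cumulative(n_cycles: int) -> dict:
    """Cumulative bleomycin (IU) and etoposide (mg/m2) after n_cycles of BEP."""
    return {drug: per_cycle * n_cycles for drug, per_cycle in BEP_PER_CYCLE.items()}

if __name__ == "__main__":
    for cycles in (3, 4):
        totals = cumulative(cycles)
        print(f"{cycles} cycles: bleomycin {totals['bleomycin_IU']} IU "
              f"(cap {BLEOMYCIN_CAP_IU}-400 IU), etoposide {totals['etoposide_mg_m2']} mg/m2 "
              f"(leukemia-risk cutoff {ETOPOSIDE_LEUKEMIA_CUTOFF} mg/m2)")
    # Four cycles of BEP reach ~360 IU bleomycin and ~2 g/m2 etoposide,
    # i.e. right at both thresholds quoted in the text.
```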
Metabolic syndrome affects 8–32% of long-term TGCT survivors, who have almost double the risk compared with controls. Male hypogonadism is observed in 11–35% of this population. Several studies have shown an association between metabolic syndrome and both chemotherapy and low testosterone levels in TGCT survivors . Patients should be counseled on a healthy lifestyle, smoking cessation and physical activity, and blood pressure, cholesterol and testosterone levels should be monitored during follow-up. Chemotherapy-induced cardiovascular toxicity is the result of direct endothelial damage induced by cisplatin and indirect hormonal and metabolic changes . Compared with the general population, patients with TGCT who received chemotherapy had a significantly higher relative risk of cardiovascular disease, ranging from 1.4 to 7.1. The incidence of angina, myocardial infarction or sudden cardiac death was 7%. Increased cardiovascular mortality (from both heart disease and cerebrovascular disease) was not associated with TGCT itself but with cisplatin-based chemotherapy, especially during treatment and at 10 years . Pre-existing fertility problems can be exacerbated by chemotherapy, extended-field radiotherapy and RPLND, and fertility is further reduced by combined-modality treatment with high cumulative doses of cisplatin (>850 mg). Population-based studies in TGCT survivors have shown slightly reduced overall fertility and more frequent use of assisted reproductive technology, with a success rate of 50%. No increased risk of malformations has been found in children of TGCT survivors . Long-term cisplatin-induced peripheral neuropathy was seen in 20–30% of patients 5–10 years after treatment and was associated with cumulative cisplatin dose, age, smoking and alcoholism. Symptomatic ototoxicity is also common, including tinnitus (59%), hearing loss (18%) or both (23%). Half of patients who received a cumulative cisplatin dose >400 mg/m² reported tinnitus and hearing loss. Finally, other toxicities, generally dose related, are more common in TGCT survivors than in the general male population. These include some degree of renal impairment (up to 30%), pulmonary fibrosis (5–10% of patients treated with bleomycin, which can be fatal in 1%), chronic fatigue (17%), anxiety disorders (17–38%), and clinically significant depression (5–12%) . Oncologists should be aware of all these possible complications in long-term survivors in order to counsel patients on preventive measures and, if necessary, to provide early diagnosis and treatment. The current study has been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Person-centered abortion care scale: Validation for medication abortion in the United States | 452448db-955c-44e0-8011-0c8aea2eaa25 | 11849315 | Surgical Procedures, Operative[mh] | Introduction In June 2022, the US Supreme Court overturned the 1973 Roe v. Wade decision, eliminating federal protections for the provision of abortion. As of February 2024, abortion is illegal in 14 states and at risk of being highly restricted or banned in 26 states total . Amidst these restrictions and COVID-era changes in healthcare delivery, medication abortions now comprise 53% of all facility-based abortions . While the safety and efficacy of medication abortion are well-established, there are no validated tools to assist health care providers in ensuring high quality and person-centered medication abortion care . Quality of care is defined by the Institute of Medicine as care that is safe, effective, person-centered, timely, efficient, and equitable . In particular, person-centered care, or care that is respectful of and responsive to individuals’ preferences, needs, and values , is globally recognized as a distinct component of quality abortion care . However, there is a lack of validated measures for abortion quality, particularly person-centered care . Abortion care that is not person-centered can take many forms (e.g., discrimination; lack of pain management) with disproportionate impacts on people of color and other marginalized communities . Those of lower socioeconomic status and in highly restrictive legal settings with deeply embedded social stigma are also more likely to experience care that is not person-centered, contributing to health inequities . Person-centered care is essential not only as a human right, but is also positively associated with clinical outcomes and adherence to post-abortion guidance . The US Food and Drug Administration (FDA) approved direct mailing of medication abortion pills to patients through telemedicine options in April 2021, temporarily modifying the in-person dispensing requirement due to the COVID-19 pandemic. This long-standing requirement, which mandated that the first medication abortion pill (mifepristone) be administered in-person under a clinician’s supervision, was removed by the FDA in January 2023 . Telemedicine medication abortion is safe, effective, and acceptable . This is in line with the rise in availability and use of telemedicine services across a number of health sectors in recent years, from perinatal mental health to substance use to primary care services . There is a growing body of literature suggesting that telemedicine advances person-centered approaches by increasing patient satisfaction, decreasing barriers to care , and meeting peoples’ preferences for how they receive care (e.g. in clinic vs telemedicine) . To our knowledge, only two studies have examined person-centered care for telemedicine medication abortions. The studies found that it increased pregnant peoples’ options, autonomy and access to timely abortion care by removing healthcare barriers related to geographic distance, childcare, or employment . However, both studies were qualitative; more empirical, nuanced understanding of different aspects of person-centeredness is needed, including potential limitations of a telemedicine approach. This study aims to adapt and validate a scale to measure person-centered care for in-person and telemedicine medication abortions in California . 
Methods
The Person-Centered Abortion Care (PCAC) scale, which was developed and validated in Kenya , served as the basis for adaptation and validation in the US context (US-PCAC). We employed a standard sequential approach to scale development, described in detail below . This study was conducted at a large urban academic health clinic wherein eligible patients were given the option to have a medication abortion by telemedicine or in person between June 2018 and December 2022.
2.1. Defining domains and expert reviews
A technical advisory committee (TAC), composed of 12 experts (i.e., US-based abortion service providers and researchers), reviewed domains of person-centered reproductive healthcare and the original 26-item PCAC scale developed in Kenya . Domains are the major constructs of person-centered care defined by the literature . Additionally, we conducted a literature review on recent measures of abortion experiences. From the original list, the expert reviewers modified, added, and deleted items, ultimately expanding the list to 44 items. In a follow-up TAC meeting and subsequent training with interviewers, TAC and study team members consolidated the list from 44 to 37 items to reduce redundancy and omit items considered to be less relevant in a US setting.
2.2. Cognitive interviews
In total, 37 items were tested during the cognitive interviews. Input from the cognitive interviews (n = 12) included suggestions for slight wording changes (e.g., adding more specificity to questions regarding wait times), consistency of response options across items, verifying that terms were understood and resonated with the experiences of participants across modalities (e.g., "Did you feel seen and heard by the healthcare team?"), and removing items that were duplicative or less relevant according to study participants (e.g., "Did you feel like you were physically treated roughly?"). In total, seven items were removed that were duplicative or less relevant. Additional changes to items were based on more substantive input from participants; for example, two items were revised to use the term "decisions" versus "decision" to differentiate between the multiple decisions required in the medication abortion process (e.g., where and when they wanted to take the medication, methods for managing pain) and the larger "decision" to have an abortion or to have a medication versus procedural abortion. Moreover, three items were added based on participant feedback, including "Do you feel you were provided with enough information on what to expect regarding pain or discomfort that could arise from the procedure?", "Did you feel that you could confide in the health care team regarding personal or sensitive information?", and "Did you feel that the healthcare team showed that they care about you?".
2.3. Person-centered abortion care survey
The eligibility criteria for the US-PCAC survey were as follows: (1) had either a medication abortion via telemedicine with no exam or ultrasound between April 1, 2020 and December 31, 2022 (referred to in the text as "telemedicine" patients) or an in-person medication abortion between June 1, 2018 and December 31, 2022 (referred to in the text as "in-person" patients); (2) were 6 weeks or more from completion of the abortion, to be able to report outcomes (e.g., abortion completion); (3) were able to take the online survey in English; and (4) were 18 years or older at the time of recruitment.
The longer timeframe for the in-person sample was to allow for a sufficient sample size given the limited number of abortions performed during the study period. The study team consulted with the clinic's Clinical and Translational Science Institute (CTSI) biomedical informatics team to obtain lists of eligible participants. Each eligible participant received a recruitment message containing a personal Qualtrics survey link and passcode via email and/or through the hospital's secure messaging platform. Participants were directed to the online informed consent page. Once they agreed to participate, they were directed to the 20-minute online survey, which included questions on demographic, social, and health outcomes in addition to the PCAC items. The survey was conducted from December 2021 to March 2023. A total of 970 patients were contacted, with up to three follow-up reminders. The final sample size was 182 participants (147 in-person and 45 telemedicine patients), resulting in a participation rate of 18.8%. A general rule for the minimum sample size needed for conducting factor analysis is three participants times the number of items . Our target sample size was 150 participants per modality (for a total of 300 participants), but this was not achieved due to the low volume of abortions in the clinic, particularly for the telemedicine sample. Each respondent who completed the survey was given a $20 electronic gift card.
2.4. Psychometric analyses
In total, 33 US-PCAC items were included, with two telemedicine-specific items and one in-person-specific item (see ). We ran a series of factor analyses for the full sample and then separately for in-person and telemedicine participants. Missingness was low across all variables (<5%), and we therefore used complete-case analysis (see ). All analyses were conducted using Stata . Negative items were reverse coded so that negative responses were coded 0 and the best responses coded 3, to obtain a uniform scale. We constructed a correlation matrix and examined item-test correlation, item-rest correlation, and alpha to assess reliability. We then conducted exploratory factor analysis. We first assessed a one-factor solution to determine whether there was a global measure of US-PCAC. We examined the factor loadings of each item, using a cutoff of 0.30 to determine which items to delete or retain. We used oblique rotation because of the naturally occurring correlation between the rotated factors . We used a scree plot, eigenvalues of factors, and conceptual justifications (e.g., examining how each item is understood and theorized in existing literature) to determine the number of factors to retain; the scree plot was used to visually inspect factors with eigenvalues greater than 1.0 . Cronbach's alpha was used to examine internal consistency for the full scale and subscales, with 0.70 considered acceptable reliability . We named sub-domains based on the factors and what is known from existing literature . Lastly, we examined criterion validity by assessing bivariate associations between US-PCAC scales and "satisfaction," a perceived quality of care measure often used to assess quality outcomes . Satisfaction was measured by the question, "Overall, how satisfied were you with the entire process?" Response options corresponded to a four-point Likert-type scale (Not at all, Somewhat, Very, or Extremely) and were dichotomized (0 = Not at all/Somewhat satisfied vs 1 = Very/Extremely satisfied). We used logistic regression to assess bivariate associations.
We also conducted sensitivity analyses using a continuous satisfaction score and results did not differ. All study procedures were reviewed and approved by an Institutional Review Board and informed consent was received by all participants.
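The analyses above were carried out in Stata; purely to illustrate the same scoring and reliability logic (reverse coding, 0–100 standardization, Cronbach's alpha, and the eigenvalue-based factor-retention check), a minimal Python sketch follows. The item names and data are hypothetical and are not the study's instrument or dataset.

```python
# Minimal re-implementation sketch of the scoring/reliability steps described
# above (the study itself used Stata). Item names and data are hypothetical.
import numpy as np
import pandas as pd

def reverse_code(items: pd.DataFrame, negative_items: list, max_score: int = 3) -> pd.DataFrame:
    """Reverse-code negatively worded items so 0 = worst and max_score = best for every item."""
    out = items.copy()
    out[negative_items] = max_score - out[negative_items]
    return out

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_score(items: pd.DataFrame, max_score: int = 3) -> pd.Series:
    """Sum of item scores rescaled to a 0-100 scale (100 = best possible person-centered care)."""
    return items.sum(axis=1) / (max_score * items.shape[1]) * 100

def kaiser_retained_factors(items: pd.DataFrame) -> int:
    """Number of eigenvalues of the item correlation matrix above 1.0 (scree/Kaiser check)."""
    eigvals = np.linalg.eigvalsh(items.corr().to_numpy())
    return int((eigvals > 1.0).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical responses: 100 participants x 5 items scored 0-3
    df = pd.DataFrame(rng.integers(0, 4, size=(100, 5)),
                      columns=[f"item{i}" for i in range(1, 6)])
    df = reverse_code(df, negative_items=["item5"])
    print("alpha:", round(cronbach_alpha(df), 2))
    print("0-100 scores (first 3):", standardized_score(df).head(3).round(1).tolist())
    print("factors with eigenvalue > 1:", kaiser_retained_factors(df))
```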
Results
A total of 182 participants completed all PCAC scale items, including 137 in-person participants and 45 telemedicine participants. Demographic characteristics are presented in Table .
3.1. Exploratory factor analysis (EFA)
Due to low item correlation and factor loading, we removed "respect support person" (health care team respectful towards support person). For the full sample of participants, we assessed all items excluding the dropped item "respect support person" (29 items). After examining the eigenvalues for each item, we found that items fit better onto a three-factor solution. Oblique rotation indicated three factors, and 15 items with a factor loading > 0.3 loaded positively onto one of the three factors: seven items onto Factor 1, corresponding to the Respect and Dignity sub-domain; five items onto Factor 2, corresponding to Responsive and Supportive Care; and three items onto Factor 3, corresponding to Communication and Autonomy. Of the items that cross-loaded onto more than one factor above the cutoff, we categorized four based on the factor with the higher loading; the remaining items were categorized based on conceptual reasoning. Four items loaded highest on Factor 1 but were categorized into other factors for conceptual reasons: "Confidential" (feeling that the health care team kept health information confidential) was categorized into Factor 2; and "Involved" (provider involved in decisions about care), "Questions" (could ask the health care team any questions) and "Answers" (get answers to all questions in a satisfactory manner) into Factor 3. Three items that loaded highest onto Factor 4 were categorized for conceptual reasons: "Treat negatively" (treated negatively based on identities or characteristics) was grouped into Factor 1, "Overhear" into Factor 2, and "Coerced" (coerced into a decision) into Factor 3. Two items, "Support person" and "Language understand" (health care team spoke in an understandable language and manner), loaded under the cutoff but were categorized into Factor 1 on a conceptual basis. Table presents the oblique rotated factor loadings for the 29 items in the final US-PCAC scale and summarizes the final decision for each item's subdomain. We also conducted factor analyses separately for the in-person and telemedicine samples. For the in-person sample, one additional item, "Exams private" (covered up during exams), was added and had a factor loading > 0.3 on Factor 1. For the telemedicine sample, two additional items were administered: "Communicate telemedicine" (communicate effectively using the telemedicine portal) and "Telemedicine private" (telemedicine visit felt private and secure). "Communicate telemedicine" had a factor loading of 0.4971 on Factor 2 but was categorized under Communication and Autonomy for conceptual reasons. "Telemedicine private" loaded under the cutoff but was retained and categorized under Responsive and Supportive Care on a conceptual basis. Table presents standardized alphas and scale descriptive statistics for the full US-PCAC scale and subscales, standardized to a 100-point scale. For the full sample, the standardized alpha for the 29-item PCAC scale was 0.95 (mean score = 87.86, SD = 15.03; range 20.69–100). For the in-person sample, the standardized alpha for the 30-item PCAC scale was also 0.95 (mean score = 85.94, SD = 16.14; range 22.22–100). Due to the small sample of telemedicine participants, the standardized alpha could not be calculated for the 31-item US-PCAC scale.
The unstandardized alpha for the telemedicine 31-item scale was 0.86 (mean score = 94.24, SD = 7.22; Range = 61.29–100). 3.2. Criterion validity In bivariate results, among the full sample, each one-unit increase in total standardized US-PCAC score was associated with a 1.11 times (95% CI: 1.07, 1.15) higher odds of satisfaction. All PCAC subscales were also positively associated with satisfaction: Respect and Dignity (OR = 1.06, 95% CI: 1.04, 1.09), Responsive and Supportive Care (OR = 1.06, 95% CI: 1.03, 1.09), and Communication and Autonomy (OR = 1.12, 95% CI: 1.08, 1.17). For the in-person subsample, participants who reported greater satisfaction with the entire process had significantly higher US-PCAC total and subscale scores.
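The bivariate criterion-validity models reported above were fit in Stata; the sketch below is an illustrative Python equivalent under assumptions. The DataFrame `df` and the column names `uspcac_total` (the 0-100 standardized total score) and `satisfaction_raw` (the four-point satisfaction item) are hypothetical.

```python
# Illustrative sketch of the criterion-validity logistic regression (the study used Stata).
# Hypothetical columns: 'uspcac_total' (0-100 standardized score) and 'satisfaction_raw'
# (Not at all = 0 ... Extremely = 3).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def satisfaction_odds_ratio(df: pd.DataFrame, score_col: str = "uspcac_total"):
    """Dichotomize satisfaction (Very/Extremely vs Not at all/Somewhat) and regress it
    on the standardized score; returns the OR per one-point increase with its 95% CI."""
    data = df.copy()
    data["satisfied"] = (data["satisfaction_raw"] >= 2).astype(int)
    result = smf.logit(f"satisfied ~ {score_col}", data=data).fit(disp=False)
    odds_ratio = float(np.exp(result.params[score_col]))
    ci_lower, ci_upper = np.exp(result.conf_int().loc[score_col])
    return odds_ratio, (float(ci_lower), float(ci_upper))

# Example usage (hypothetical):
# or_total, ci = satisfaction_odds_ratio(df)   # e.g., ~1.11 with CI (1.07, 1.15)
# print(or_total, ci)
# Because the score is on a 0-100 scale, the per-point OR compounds over larger gaps:
# a 10-point higher score corresponds to roughly 1.11 ** 10, i.e. about 2.8 times the odds.
```

The subscale odds ratios would be obtained the same way, fitting one bivariate model per standardized subscale score.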
Discussion This study is significant in that it is the first validated quality of care scale for abortion in the US and highlights three dimensions of care: respect and dignity, communication and autonomy, and responsive and supportive care. Our study found high construct, content, and criterion validity and reliability for the PCAC scale in a US setting for both in-person and telemedicine medication abortion care. Given evidence of improved clinical and patient outcomes associated with patient-centered care, the US-PCAC provides a much-needed, standardized tool that may aid monitoring and research efforts. The US-PCAC scale adds to a set of validated person-centered care scales for reproductive health that include scales for abortion, family planning, prenatal, and intrapartum care. While there are other scales that measure person-centered contraceptive care (see ), having a standardized set of measures across the continuum of sexual and reproductive healthcare allows for comparisons across contexts and health services. Across several studies, the communication and autonomy domain consistently has the lowest scores, suggesting that a focus on ensuring comprehension of medical procedures and patient involvement in shared decision-making may be necessary. This study has several limitations. First, the small sample size, particularly for the telemedicine group, was not sufficiently robust to validate the scale for the telemedicine-specific sample. However, this study provides exploratory evidence of high construct validity and reliability for the overall scale for the telemedicine sample. Second, to recruit sufficient samples, we expanded our eligibility criteria to those who had in-person abortions from 2018 to the present, thereby increasing the possibility of challenges in recalling specifics of their care. Third, this study was only offered in English, limiting our sample to English-proficient participants.
Lastly, we recognize the limitations of the commonly-used global “satisfaction” measure to assess criterion validity, including that satisfaction is a product of expectations, such that people with low expectations may report higher satisfaction with poor care; moreover, abortion patients oftentimes report high satisfaction because of high stigma associated with abortion . However, given the lack of gold standard, we use satisfaction to measure the outcome of people’s experiences and recognize the need for future studies to examine person-centered care on other abortion outcomes. The US-PCAC is unique as it includes items specific to either in-person or telemedicine medication abortion. Person-centered care remains critically important given the expansion of telemedicine medication abortion services during COVID and in the post-Dobbs era . The tool will support broader monitoring and research efforts by establishing guidelines for person-centered abortion quality indicators in the US setting. Given differences in quality of abortion care by setting and patient characteristics, the tool can be used to assess health inequities in person-centered abortion care . Future studies may refine and shorten the number of items as performance metrics for quality improvement efforts in clinic settings in order to provide actionable recommendations to healthcare providers and systems. Appendix A |
Forensic medical evaluation of penetrating abdominal injuries | 99977f92-1f22-4f34-bbf5-6f12b376e8e7 | 11372487 | Forensic Medicine[mh] | The frequency of firearms and sharp weapon use, commonly encountered in cases of violence, is alarming. The increase in individual armament and the ease of access to unlicensed weapons contribute to these violent incidents. Sharp objects are the most frequently encountered weapons in violence and injury incidents due to their availability in homes and workplaces, their widespread sale, affordability, and the lack of sanctions for carrying them if they do not meet legal specifications. Articles 86 and 87 of the Turkish Penal Code provide important details on the grading of injuries. These articles specify that injuries may have different legal consequences depending on their severity. Injuries treatable with simple medical interventions are considered the least serious injuries in the eyes of the law and usually refer to easily treatable conditions such as shallow cuts or minor bruises. However, more serious health conditions such as bone fractures, tendon damage, major blood vessel or nerve injuries, or internal organ damage, are not considered treatable with simple interventions. These types of injuries require more complex medical interventions and may result in more serious legal consequences. An injury that causes a life-threatening situation is classified as such when a person’s life is exposed to immediate danger following an injury but can be saved either by the individual’s own bodily resistance or by medical assistance. Importantly, a life-threatening situation must have occurred during the incident; death is not necessary. The fact that the person subsequently recovers, with or without treatment, does not alter this classification. When making a decision, the medical findings (the effect on the person) should be taken into account, rather than the magnitude, severity, or dangerousness of the event that caused the injury. Persistent impairment or loss of function of one of the senses or organs: For this condition to be recognized after the injury, the impairment of the function of one of the senses or organs must be permanent. In Article 86 of the Turkish Penal Code, if the offense of intentional injury is committed with a weapon, a more severe form of the crime occurs and the penalty is increased. Crimes of intentional injury are classified as crimes subject to complaint. However, in cases where the crime is committed against a superior, subordinate, spouse, or sibling, or against a person who cannot defend themselves physically or mentally, or is committed with a weapon, a lawsuit may be filed without a complaint. Article 6 of the Turkish Penal Code defines a weapon as any kind of cutting, piercing, or bruising tool made for use in attack and defense. In our study, we aimed to analyze the demographic characteristics of penetrating abdominal injuries, including the most common age range, the time periods during which the injuries occurred, and the effects of alcohol and substance use on such injuries. We also examined the extent of the injuries, the organs most commonly damaged, the mortality rate, and sought to contribute to the trauma data of our country. The research aims to contribute to more effective management of injury cases by addressing the challenges in forensic medicine practice. 
It also aims to provide an important reference point for the development of injury prevention and intervention strategies by exploring the social dimensions of such injuries and the legal framework in response to them, providing foundational information for the development of relevant legal and health policies. In our study, we retrospectively reviewed the hospital archives and forensic reports of 28,619 cases admitted to the Emergency Department of Kütahya Evliya Çelebi Hospital over a five-year period from January 1, 2016 to December 31, 2020, with the approval of the ethics committee. All cases with penetrating abdominal injuries were included in the study; of the 28,619 cases screened, 85 (0.29%) met this criterion and were analyzed. After examining the forensic reports of the cases, data were obtained by reviewing the past medical histories of the patients from the hospital's information management system. The data obtained from the examination were evaluated for demographic characteristics, time of the incident, type of incident, and site and degree of injury using a statistical program. Statistical Analysis The data obtained in the study were analyzed using the IBM SPSS (Statistical Package for the Social Sciences) Statistics 22 program. For quantitative data, descriptive statistics such as mean, standard deviation, median, and maximum-minimum values were used. For qualitative data, frequency tables including frequency and percentage values were utilized. Double or triple cross-tables and chi-square tests were employed to examine the relationship between variables. Cramer's V was used to calculate the degree and direction of the relationship between the categorical variables. To determine whether there was a statistically significant difference between two independent groups regarding a numerical variable, the data were first tested for normal distribution using the Kolmogorov-Smirnov and Shapiro-Wilk tests. As the data did not conform to a normal distribution, the Mann-Whitney U test, a non-parametric test, was applied. Column graphs were created. For statistical significance, a 0.05 margin of error and a 0.95 confidence level were set; results with p < 0.05 were considered statistically significant. Ethics Approval for this study was obtained from the Non-Interventional Clinical Research Ethics Committee of the Rectorate of Kütahya Health Sciences University with decision number 2021/11-20 on June 30, 2021. Since our study was an analytical retrospective study, data were obtained from the hospital health data system. Utmost attention was paid to the privacy of the individuals' identity information, and it was not shared with anyone outside the study team. Only health data relevant to the study were used; other data were not recorded.
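The testing workflow described in the Statistical Analysis paragraph (a normality check followed by the Mann-Whitney U test for two-group comparisons, and chi-square cross-tabulations with Cramer's V for categorical variables) was carried out in SPSS 22. The Python sketch below only mirrors that logic; the DataFrame `cases` and its columns ('age', 'died', 'sex', 'instrument') are hypothetical.

```python
# Sketch of the described workflow (the study used SPSS 22); `cases` and its columns
# ('age', 'died', 'sex', 'instrument') are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

def compare_numeric_between_two_groups(df: pd.DataFrame, value: str, group: str):
    """Shapiro-Wilk normality check per group, then Mann-Whitney U if non-normality is found."""
    groups = [g[value].dropna() for _, g in df.groupby(group)]
    assert len(groups) == 2, "expects exactly two independent groups"
    normal = all(stats.shapiro(g)[1] > 0.05 for g in groups)
    if normal:
        return stats.ttest_ind(*groups)                       # parametric fallback
    return stats.mannwhitneyu(*groups, alternative="two-sided")

def chi_square_with_cramers_v(df: pd.DataFrame, row: str, col: str):
    """Chi-square test of independence on a cross-table plus Cramer's V effect size."""
    table = pd.crosstab(df[row], df[col])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    n = table.to_numpy().sum()
    cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, p, cramers_v

# Example usage (hypothetical):
# print(compare_numeric_between_two_groups(cases, value="age", group="died"))
# print(chi_square_with_cramers_v(cases, row="sex", col="instrument"))
```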
Of the 85 patients with penetrating injuries to the abdominal cavity, 74 (87.1%) were male and 11 (12.9%) were female. The mean age was 31.3±13 years, with the youngest being 12 years old and the oldest 81 years old. The most common age range was 21-30 years (40%). The mean age for both sexes was again 31 years. When analyzing the time intervals in which the incidents occurred (dividing the day into three 8-hour periods), it was observed that while there was no difference among women, a notable concentration of incidents among men occurred during the evening and night hours. Most incidents were recorded between 20:00-04:00 hours, accounting for 64.9%, while the fewest occurred between 04:00-12:00 hours, accounting for 10.8%. When categorizing the locations of the incidents into urban centers and districts, it was found that 83.5% of the incidents occurred in the urban center, and 16.5% in districts. When the origins of the injuries were analyzed, it was found that 87.1% were caused by intentional injury, 5.9% by accidents, 5.9% by suicide, and 1.2% by animal (boar) attacks. When analyzing the distribution of origins by gender, it is observed that the rate of victims of intentional injury is the highest in both genders. When the distribution of the origins according to the time of day was analyzed, it was found that intentional injuries were most common, occurring at a rate of 66.2% between 20:00-04:00 hours. In four of the five suicide cases a sharp instrument was used and one case involved a firearm; all of them resulted in anterior abdominal injuries, one case had no injury to the abdominal organs, one case involved a stomach injury, one case a liver injury, and two cases had intestinal injuries. Four patients had a single wound, and one patient had 11 wounds. Five of the wounds of the patient diagnosed with psychosis, who injured himself in 11 places with a sharp instrument, penetrated into the abdominal cavity, one penetrated the pericardium, and there was also a liver laceration and left ventricular injury; he was operated on and discharged. According to the evaluation based on the instrument used in the injury cases, the most common injuries were stab wounds at a rate of 69.4%, followed by firearm injuries at 27.1%, and other causes (falls from a height, harvester accidents, and animal attacks) at 3.5%.
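As a small illustration of how the three 8-hour incident periods reported above could be derived from admission timestamps, the sketch below shifts the clock hour by four so that the 20:00-04:00 band, which wraps past midnight, falls into a single bin. The DataFrame `cases` and its 'admission_time' column are hypothetical; the original grouping was presumably prepared in SPSS.

```python
# Sketch of the three 8-hour incident periods (04:00-12:00, 12:00-20:00, 20:00-04:00).
# Hypothetical DataFrame `cases` with a datetime column 'admission_time'.
import pandas as pd

def eight_hour_period(timestamps: pd.Series) -> pd.Series:
    shifted = (timestamps.dt.hour - 4) % 24        # 04:00 -> 0, 12:00 -> 8, 20:00 -> 16
    labels = ["04:00-12:00", "12:00-20:00", "20:00-04:00"]
    return pd.cut(shifted, bins=[0, 8, 16, 24], right=False, labels=labels)

# Example usage (hypothetical):
# cases["period"] = eight_hour_period(cases["admission_time"])
# print(cases["period"].value_counts(normalize=True).round(3))        # e.g., ~0.649 for 20:00-04:00
# print(pd.crosstab(cases["sex"], cases["period"], normalize="index"))
```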
Of the firearm injuries, 52% were gunshot bullet injuries, and 48% were shotgun pellet injuries. In all categories, the rate of stab wounds was higher than that of firearm injuries. Of the seven cases admitted as deceased, four were due to firearm injuries and three were due to stab wounds. Thus, although stab wounds accounted for the majority of the instruments used overall, firearm wounds were more common than stab wounds in cases that resulted in death. When we examine the instruments used according to gender, we observe a high rate of stabbing in both genders, while males have a higher rate of firearm injuries than females. However, this difference was not found to be statistically significant (p=0.43). In the forensic reports of the 23 cases with firearm injuries, localization was described in all cases, 11 had multiple wounds due to pellet injuries, eight of the 12 cases with gunshot wounds were described as having entry and exit wounds, and the nature of the wound was not mentioned in four of them. When analyzing the alcohol levels of the cases upon their arrival at the hospital after the incident, it was found that alcohol was detected in 36.5%, not detected in 30.6%, and not tested in 32.9%. Of the cases where alcohol was detected, the levels were between 0-50 mg/dL in 7.1%, between 50-100 mg/dL in 4.7%, and higher than 100 mg/dL in 24.7%. When alcohol values were analyzed according to gender, 72.7% of the women were not tested for alcohol, alcohol was not detected in 18.2%, and alcohol was detected in 9.1%; among the men, alcohol was detected in 40.5%, not detected in 32.4%, and not tested in 27%. Alcohol testing rates were significantly higher in male than in female subjects (p=0.002). When analyzing the alcohol values of the cases according to the time of admission to the hospital, it was observed that 30.8% of the cases admitted between 20:00-04:00 hours had an alcohol value higher than 100 mg/dL, while only 4.5% of the cases between 12:00-20:00 hours had such high alcohol values. When analyzing the presence of alcohol according to the origin of the injury, it was found that among the cases of intentional injury, alcohol was detected in 39.2%, not detected in 32.4%, and not tested in 28.4%. Alcohol was detected in 40% of suicide cases. Alcohol was not tested in 80% of accident cases. It was determined that 48% of the cases in which alcohol was detected were between the ages of 21-30, and 29% were between the ages of 31-40. In our study, the impact of alcohol levels on injury severity was analyzed. The relationship between alcohol levels and the necessity for surgery was not statistically significant (p=0.698). Similarly, the relationship between alcohol levels and the length of hospital stay was not statistically significant (p=0.341). Additionally, the relationship between alcohol levels and the likelihood of being admitted as deceased was not statistically significant (p=0.906). When cases were analyzed according to whether they underwent surgery by the general surgery team, 81% required surgery, 13% did not require surgery, and 6% died without undergoing surgery. When examining the organs damaged as a result of injuries penetrating the abdominal cavity, it was found that all abdominal organs were intact in 25.9% of the cases. Nearly half of the cases (44.7%) had a single organ injury, while 23.5% had damage to more than one organ.
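The blood-alcohol groupings and their association checks reported above can be sketched as follows. The study ran these comparisons in SPSS; the version below is only an analogue under assumptions, with hypothetical columns ('bac_mg_dl', where a missing value means not tested, 'needed_surgery', and 'hospital_days') and a Kruskal-Wallis test standing in for the non-parametric comparison of hospital stay across the three alcohol groups.

```python
# Hedged analogue of the alcohol-level analyses above (the study used SPSS). Hypothetical
# columns: 'bac_mg_dl' (NaN = not tested), 'needed_surgery' (0/1), 'hospital_days'.
import numpy as np
import pandas as pd
from scipy import stats

BAC_BINS = [0, 50, 100, np.inf]
BAC_LABELS = ["0-50 mg/dL", "50-100 mg/dL", ">100 mg/dL"]

def alcohol_group(bac: pd.Series) -> pd.Series:
    """Group measured blood alcohol values into the three categories used above."""
    return pd.cut(bac, bins=BAC_BINS, labels=BAC_LABELS)

def alcohol_vs_outcomes(cases: pd.DataFrame):
    """Chi-square of alcohol group vs need for surgery, and a Kruskal-Wallis test of
    hospital stay across the alcohol groups (a non-parametric analogue of that comparison)."""
    grouped = cases.assign(bac_group=alcohol_group(cases["bac_mg_dl"]))
    table = pd.crosstab(grouped["bac_group"], grouped["needed_surgery"])
    chi2, p_surgery, dof, _ = stats.chi2_contingency(table)
    stay = [g["hospital_days"].dropna() for _, g in grouped.groupby("bac_group", observed=True)]
    h_stat, p_stay = stats.kruskal(*stay)
    return p_surgery, p_stay

# Example usage (hypothetical):
# print(alcohol_vs_outcomes(cases))
```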
Including the cases involving multiple organ damage, the small intestine was the most frequently injured organ, affected in 23.7% of cases, followed by the liver at 18.9%, and the stomach at 13.1% . The gallbladder and pancreas were the least frequently injured organs, each affected in 3.6% of cases. Since 5.9% of patients died, there is no data on organ injuries in our hospital . When we analyzed for organ dysfunction or loss, we found that 72 (84.7%) patients experienced no loss or dysfunction of abdominal organs, 7 (8.2%) patients suffered intra-abdominal organ loss, and 6 (7.1%) patients died. Among the surgeries for organ loss, there were 2 splenectomies, 1 nephrectomy, 2 cholecystectomies, 1 combined splenectomy and distal pancreatectomy, and 1 combined splenectomy and nephrectomy. It was found that extra-abdominal organ loss (an eye) occurred in 1 case, which was not included in these rates. It was found that 6 of the cases resulting in organ loss were caused by stab wounds and 1 was caused by a firearm injury. When analyzing the origins of the cases that resulted in organ loss, it was observed that all were due to intentional injury crimes, and no organ loss occurred in cases with other origins. Looking at the number of wounds across the entire body, 45 (52.9%) had a single wound, 10 (11.8%) had 2 wounds, 10 (11.8%) had 3 wounds, and 25 (23.5%) had more than 4 wounds. The average number of wounds was 3.6. The average number of wounds from firearms was 5.8, and 2.7 from stab wounds. Since the distribution of shotgun pellet injuries was not described in detail, it was assumed that these injuries occurred from a single shotgun shot. While the median number of injuries was 1 in living patients, it was 5 in patients who died. There was no statistically significant difference in the number of injuries between the patients who died and those who did not (p=0.061). The number of wounds did not contribute to mortality. The rate of female patients with more than 3 wounds was 45%, while this rate was 20% in male patients. When analyzing the abdominal injuries according to the direction of penetration, it was found that 66 (77.6%) of the cases had anterior abdominal injuries, 11 (12.9%) had injuries penetrating the abdominal cavity from the posterior part of the body, and 8 (9.4%) had lateral injuries. Upon examining other injuries encountered in addition to those in the abdominal cavity, it was found that 47.1% of the cases had no extra-abdominal injury, 24.7% had lung injuries (pneumothorax, hemothorax), 36.4% had extremity injuries, 3.5% diaphragm injuries, 2.4% had heart injuries, and 1.2% had facial injuries . The mean duration of hospitalization was 9 days . The hospitalization duration ranged from 1 to 10 days for most patients, while it exceeded 20 days for a few patients. When the cases were evaluated for bone fractures, it was found that 75 (88.2%) had no bone fractures, 4 (4.7%) had rib fractures, 3 (3.5%) had fractures in facial and extremity bones, and 3 (3.5%) had no data on bone fractures because they were deceased upon arrival. When analyzing the origins of the cases with bone fractures, it was observed that all were due to intentional injury. It was determined that 70 (82.4%) of the cases had no arterial injury, 7 (8.2%) had intra-abdominal arterial injuries, 2 (2.4%) had extra-abdominal arterial injuries, and 6 (7.1%) had no data on vascular injuries because the patients died. 
It was found that 52 (61.2%) of the cases had intra-abdominal bleeding, 28 (32.9%) did not have intra-abdominal hemorrhage, and 5 (5.9%) had no data because they were deceased . When analyzing the mortality rates of injuries to the abdominal cavity, it was found that 78 (91.8%) of the cases were discharged, and 7 (8.2%) died in the hospital. It was determined that 6 of the cases who died had been in cardiac arrest before arriving at the hospital, were admitted to the emergency department accompanied by cardiopulmonary resuscitation (CPR) from the 112 team, did not regain respiration or circulation, and were declared deceased, while 1 case was declared deceased after the first 24 hours. When examining the origins of the cases admitted as deceased, it was understood that all were of intentional injury origin. The mean age of the cases admitted as deceased was 29.6 years. In our study, the abdominal injuries admitted to Kütahya Evliya Çelebi Hospital over a 5-year period were analyzed. It was found that most victims of such injuries were around 30 years old, predominantly male, and victims of violence. It was observed that the most common time for admission was at night and that the injuries were mostly inflicted with sharp instruments. While the majority of injuries were caused by sharp instruments, firearm injuries were more common in cases resulting in death. The fact that young men are at higher risk is often attributed to social and psychological factors, where risky behaviors are more common. Alcohol and substance use may be more prevalent, and tendencies toward conflict or violence may be higher. It was noted that the nature of the wound was not mentioned in four cases. Errors and omissions in forensic reports can frequently occur in emergency departments. In forensic reporting, accurate localization and description of wounds, and identification of entry and exit wounds in gunshot cases are crucial for the forensic process. Conclusions about the crime tool used can be made by evaluating the findings on the skin from knives, which are frequently used in cutting and piercing injuries. In some incidents, injuries may involve more than one defendant and knife. It is very important in forensic reporting to determine whether the injury on the person’s body has a skin-subcutaneous course, affects deep soft tissues (muscle and fascia), crosses the peritoneum, and/or causes internal organ injury. These factors are important determinants in the severity of punishment received by the defendant. Detailed descriptions of surgical interventions, operation notes, and the lesions observed on the person’s body before the first intervention are critical for guiding forensic medicine practices. When analyzing the alcohol levels of the cases upon their arrival at the hospital after the incident, it was found that alcohol was detected in 36.5%, not detected in 30.6%, and not tested in 32.9%. In the study of Altun et al. on sharp object injuries in living subjects, 39% of the subjects were found to be alcoholic, 32% non-alcoholic, and 29% had no alcohol information. In a study by Bilgin et al. on forensic autopsy cases involving stab wounds, alcohol was detected in 34.6% of the cases, and narcotic-drug substances were detected in 4.7%. We believe that examining substance use in addition to alcohol analysis in cases of suicide and violence-oriented incidents will be useful in clarifying the forensic process and determining the underlying causes. 
It was observed that alcohol tests were requested less frequently for female cases. Considering that these cases are forensic in nature, and that the use of alcohol and drugs is also important in the follow-up and treatment of penetrating abdominal trauma, it is necessary to perform these analyses in all forensic cases. Alcohol was not tested in 80% of the accident cases. In emergency conditions, it was observed that the rate of requesting alcohol tests varied according to the type of injury. When examining the results of alcohol levels on injury severity in our study, the relationship between alcohol levels and injury severity (surgery, hospitalization time, and emergency admissions) was not statistically significant. Göksu et al. found that blood ethanol level did not affect the duration of hospitalization or the mortality rate in a study conducted on patients admitted to the hospital emergency department due to traffic accidents. Afshar et al. investigated the relationship between alcohol and injury and death in trauma patients and reported that the mortality rate was highest in the group with moderate blood alcohol concentration, and lowest in the group with very high blood alcohol concentration. When analyzed according to whether the patients underwent surgery by general surgery or not, 81% of the cases required surgical intervention, and the organ most frequently injured was the small intestine, affected in 23.7% of cases. In a study conducted by Badak et al. on abdominal sharps injuries, injuries were reported as follows: 28% to the small intestine, 14.6% to the spleen, 12.1% to the liver, 10.9% to the colon, and 7.3% to the stomach. When analyzing organ dysfunction or loss, we found that 84.7% had no loss or dysfunction of abdominal organs, and the most common organ loss was the spleen. It was also observed that the most common surgical procedure performed for blunt abdominal trauma was a splenectomy. In the Turkish Penal Code, the crime of intentional injury under crimes against bodily inviolability is defined in Article 86, and the crime of injury aggravated by consequence is defined in Article 87. Paragraph 2b of Article 87 defines the crime of aggravated wounding, where the loss of function of one of the senses or organs constitutes the qualified form of the crime and results in an increase in the punishment received by the offender. In this context, the loss of organ function in penetrating abdominal injuries is significant. In our study, 58 patients had organ injuries, and 7 patients experienced intra-abdominal organ loss. The average number of wounds was 3.6, with an average of 5.8 wounds in firearm injuries and 2.7 in sharp object injuries. The higher number of wounds from firearms may be attributed to the potential for both entry and exit wounds, which increases the total count. Additionally, the higher number of wounds could be due to the ease of shooting, as no interpersonal struggle is required and the distance between individuals is greater with firearms than with sharp objects. In the study by Altun et al., 53% of the cases had a single injury, 22.7% had 2, 10.9% had 3, 13.3% had 4 or more lesions. In Derkuş’s study, it was observed that 54.6% of the cases had 1 injury, 18.3% had 2 injuries, 11.2% had 3 injuries, and 15.9% had more than 3 injuries. While the mean number of injuries in living patients was 3.4, the mean number of injuries in deceased patients was 5.4. 
There was no statistically significant difference in the mean number of wounds between deceased and non-deceased patients (p>0.05). It was found that the number of wounds did not contribute to mortality. In Uysal’s study, it was also found that the number of injuries did not contribute to mortality (p>0.05). When analyzing the abdominal injuries according to the direction of penetration, it was seen that 77.6% of the cases had anterior abdominal injuries. In the study by Kurt et al. on sharp penetrating injuries to the abdomen, it was found that 7.7% of the cases penetrated the abdominal cavity from the posterior and flank, while 92.2% of the cases had penetration in the anterior abdominal cavity. When we examined the other injuries encountered in the body in addition to injuries to the abdominal cavity, we found that 47.1% of the cases had no extra-abdominal injuries, 24.7% had lung injuries (pneumothorax, hemothorax), 36.4% had extremity injuries, 3.5% had diaphragm injuries, 2.4% had heart injuries, and 1.2% had facial injuries. In Uysal’s study, 28.1% of the cases had extremity injuries and 10.2% had head and neck injuries. Muratoğlu’s study on deaths due to penetrating injuries found that 12.5% had thoracic injuries, 7.7% abdominal injuries, 5.2% extremity injuries, and 35.4% injuries in more than one region. In Polat’s study on blunt and penetrating abdominal injuries, 25% had thoracic and 25% had extremity injuries. It was found that 61.2% of the cases had intra-abdominal bleeding, 32.9% did not have intra-abdominal bleeding, and 5.9% had no data because they died. In Taçyıldız’s study on penetrating abdominal traumas, intra-abdominal hemorrhage exceeding 1000cc was found in 59.5% of the cases. The mean age of the patients who were admitted as deceased was 29.6 years. In Taçyıldız’s study, the mean age of deceased patients in cases of penetrating abdominal trauma was 31.2 years. All our cases involved life-threatening injuries, as they all were patients with injuries to the abdominal cavity. The absence of death and recovery does not change this situation in legal terms. In the forensic traumatological evaluation of all cases, it was observed that the effect of the injury on the person was ’not mild enough to be resolved by simple medical intervention.’ Likewise, cases that do not require surgery or organ damage do not change this situation. Attention should be paid to these issues in forensic reporting. Forensic medicine experts may be expected by the courts to determine as experts whether the wounds in persons injured with a sharp instrument were self-inflicted or caused by another person during a struggle. Forensic medicine reports are crucial for distinguishing between the crime of attempted intentional homicide and the crime of intentional injury. In cases of intentional killing, where the result can be separated from the act, if the perpetrator could not complete the executive acts of the crime he started due to reasons beyond his control (i.e., if the victim did not die), the crime is considered intentional killing. At this point, it is important to differentiate between attempted intentional homicide and intentional injury. The determination of attempted intentional killing or intentional wounding is made by considering factors such as the targeted body area, the number and severity of the blows, the nature of the wounds, whether the act ended spontaneously or due to an obstacle, and the perpetrator’s behavior towards the deceased or the victim after the incident. 
The localization of the wounds, their characteristics, severity, and number are important in this context. Therefore, wounds should be accurately described in forensic reporting. Our case involving a patient diagnosed with psychosis, who injured himself with a sharp instrument in 11 places (5 of which penetrated the abdominal cavity and one the pericardium, resulting in liver laceration and left ventricular injury), illustrates how seriously a person can injure himself. Suicidal behavior is a significant psychiatric issue often seen in mental disorders. Treatment compliance may be impaired in people with mental disorders, and inpatient treatment may be necessary depending on the patient's clinical condition. In this context, an important issue in the inpatient treatment of psychiatric patients is consent. Article 432 of the Turkish Civil Code (Law No. 4721) stipulates that freedom can be restricted for protection purposes. Under this legal regulation, individuals with mental illness, mental impairment, or alcohol or drug addiction can be hospitalized for treatment against their will, following a medical board report, when there is a risk of harm to themselves or others. Everyone has the right to report such situations to the authorities. Injuries to the abdominal cavity are among the most common types encountered in emergency departments and are frequently reported in forensic medicine. These injuries are considered life-threatening due to their penetration into the abdominal cavity. In our study, we analyzed demographic characteristics, times of injury, types of injuries, and their outcomes. Penetrating injuries to the abdominal cavity were most commonly inflicted with sharp instruments and, secondarily, with firearms, and were typically related to violent incidents. The majority of the cases involved young adult males, and the incidents predominantly occurred during the night hours. The rate of alcohol consumption was found to be high. There was a tendency to request fewer alcohol tests in emergency services during first encounters, in cases involving females, and in non-violent cases. It was observed that half of the cases received a single injury blow, and the majority of the injuries were to the front of the body. Most cases required surgical intervention. The organs most frequently damaged were the small intestine and liver, with the spleen being the most commonly lost organ. Bone fractures and arterial injuries were less common. The mean duration of hospitalization was 9 days, and the mortality rate for injuries to the abdominal cavity was 8.2%. However, 6 of the 7 patients who died from penetrating abdominal injuries were admitted as deceased cases, and one patient, known to have sustained a splenic injury, died 9 days after admission. Penetrating abdominal injuries require careful evaluation and meticulous planning for surgical intervention. Optimizing surgical interventions is critical both for protecting patient health and for achieving the best possible outcomes. At this point, triage and evaluation, patient-specific planning, a minimally invasive approach, a multidisciplinary approach, emergency preparedness, adequate blood and blood product supply, and postoperative follow-up are important. In each case, the most appropriate intervention method should be determined by considering the specific situation and needs. Alcohol and substance abuse are more common in forensic traumatic cases than in the general population.
The severity of the injury may cause life-threatening internal organ or vascular injuries. Substance and alcohol use may complicate the interpretation of the clinical picture and the management of the case. In our study, it was observed that substance analysis was not requested in the cases, and alcohol testing was predominantly performed in male cases. In forensic traumatic cases, it would be useful to request both alcohol and drug tests to clarify the clinical process and enhance the accuracy of forensic reporting. Various factors affect the length of hospital stay. These factors can range from the general health status of the patient to the severity of the injury, the patient’s age and comorbidities, treatment methods, presence of complications, quality of postoperative care, and social and psychological factors. Detailed analysis of data collected in emergency departments allows for a better understanding of trauma cases and the identification of risk factors. These data can contribute to the development of forensic and public health policies. For research, detailed epidemiologic studies are recommended to understand the demographic distribution of trauma-related deaths and injuries. Forensic evaluation of traumatized cases is particularly important in identifying cases of violence and abuse. The forensic medical examination processes of such cases should be integrated into emergency department protocols. Collaboration with public health agencies can help prevent a wide range of trauma-related health problems. These collaborations can develop early intervention strategies for chronic health issues and psychological problems that may develop as a result of trauma. National policies and regulations should be developed for trauma care in emergency departments, and the necessary resources should be provided for the implementation of these policies. |
Health literacy and non‐communicable disease knowledge of pregnant women and mothers in | d3e5b5d2-ef4b-4a30-b51a-98ed2f6bb076 | 11730750 | Health Literacy[mh] | INTRODUCTION The Shanghai Declaration on promoting health in the 2030 agenda for sustainable development is steering health promotion and the public health agenda globally. The declaration identified health literacy as a critical determinant of health and urged global investment to strengthen the health literacy of individuals and communities to enable them to make informed decisions to improve their health. Following the declaration, the World Health Organisation (WHO) prioritised health literacy development to reduce the growing burden of non‐communicable diseases (NCDs) globally. There is no universally accepted definition of health literacy. However, the WHO defines health literacy as ‘the personal characteristics and social resources that influence the ability of individuals and communities to access, understand, appraise, remember, apply and use information, knowledge and services to make decisions to promote health and sustain healthy behaviour’. Health literacy is a multifaceted concept which emphasises the importance of addressing local context and social and cultural knowledge and practices to improve health outcomes for all and reduce health inequalities. Health literacy has evolved from being considered an individual's asset to being acknowledged as a broader concept of health literacy responsiveness . Broadly, health literacy responsiveness is the extent to which the health systems, services and organisations respond to the health needs of individuals and communities irrespective of their personal health literacy assets. Thus, health literacy development throughout the life course at individual, community, health system and policy levels can play a vital role in accelerating the prevention, control and management of NCDs globally. NCDs are the leading cause of mortality, accounting for 74% of all deaths globally. The majority of the NCDs burden (80%) is attributed to five major NCDs: cardiovascular diseases, diabetes, cancers, chronic respiratory diseases, and mental health conditions. , Similarly in Australia, NCDs are responsible for 89% of all deaths. The distribution and prevalence of NCDs vary based on the remoteness of areas, with a higher prevalence of five major NCDs in regional Australia (52.2%) than in the major cities of Australia (40.8%). Tasmania is an island state in Australia, most of which is classified as regional. The prevalence of the five major NCDs is higher in Tasmania (51.8%) than the national Australian average (45.3%). Thus, more effort is required in this region to curb the growing NCD burden. NCDs are complex, evolve over decades and as illustrated in Figure are linked with common modifiable behavioural risk factors (80%), metabolic risk factors and social determinants of health. Most NCDs can be prevented by addressing the common modifiable behavioural risk factors and social determinants of health earlier in life. , Interestingly, recent evidence suggests a strong link between maternal health and NCDs and their associated risk factors. Women's health status and lifestyle before, during and after pregnancy can influence their own risk as well as their children's risk of developing NCDs in the future. 
Thus, pregnancy and early motherhood provide an opportunity to achieve optimal health and health behaviours for women and provides a logical window to address the NCDs risk earlier in the life‐course for their children. , Empowering women through health literacy development and enhancing their access to supportive environments may enable women to engage in healthy lifestyle practices before, during and after pregnancy and could play a crucial role in reducing the intergenerational impact of NCDs. However, to date there has been limited research and poor awareness of the importance of addressing the health literacy needs of pregnant women and mothers with young children in Australia and globally. , In addition, many existing interventions lack codesign principles and thus are at risk of failing to engage the end user (women) in the development of solutions to support them to engage healthy lifestyle practices. In recognition of the significant burden of NCDs in Tasmania and the lack of information about the health literacy needs of pregnant women and mothers, we conducted a cross‐sectional survey using the Health Literacy Questionnaire (HLQ). In total 194 women completed the HLQ survey of which 73.2% were married, 16.5% were pregnant, 81.4% were university educated and 36% had a chronic health condition (s). The study found that the participating women experienced diverse health literacy strengths and challenges, with mean scores varying across the nine HLQ scales. Further, the women who were not married, had one or more children, were not pregnant, and had chronic health condition(s) faced more significant health literacy challenges. To enhance our understanding of the health literacy strengths and challenges that pregnant women and mothers in Tasmania experience we conducted this exploratory qualitative study. A deeper understanding of the health literacy needs of this priority population will support efforts to codesign locally relevant, health literacy responsive, and gender‐responsive solutions to empower and support women to engage in healthy lifestyle practices. The research questions guiding the research were: What health literacy strengths and challenges do pregnant women and mothers with young children (0–8 years) experience? For pregnant women and mothers with young children: What is their knowledge and beliefs about the impact of NCDs and associated risk factors on their health and their child(s) health? METHODS This qualitative study was carried out in Tasmania, Australia. This research is the second phase of a larger research project that aims to codesign health literacy solutions with pregnant women and mothers with young children (0–8 years) in Tasmania. This exploratory qualitative study is expected to complement the quantitative findings from the HLQ survey (Phase 1) and will provide rich insight into the health literacy needs of the target population using the cluster analysis and will support the development of data informed vignettes. The vignettes will be used during the next phase of this study (codesign workshops) to communicate the health literacy and broader needs of women to various stakeholders in Tasmania to generate locally relevant solutions to respond to the identified needs. The project received ethics approval from the Tasmania Health and Medical Human Research Ethics Committee (Ethics approval number H0023036). All participants were required to read an information sheet and give electronic and verbal consent prior to admission to the interview. 
2.1 Philosophical worldview Consistent with the qualitative study design, a constructivist philosophical worldview was used to interpret the meaning of varied insights of the study population. A constructivist worldview acknowledges that knowledge, beliefs, and meanings are socially constructed and are influenced by the social, cultural and historical norms in which the individual lives. Thus, the main focus was to understand the participants' viewpoint and to understand the specific context in which a person lives and thus inform the codesign of context‐specific solutions capable of engaging women in healthy lifestyle practices. 2.2 Study participants and recruitment Pregnant women and/or mothers with young children (0–8 years) living in Tasmania were recruited using a purposive sampling technique. Participants were a subset of participants who completed an HLQ survey (Phase 1) of the study recruited using a convenience sampling approach. A purposive sampling technique was used from this HLQ sample to recruit women with differing marital status (married, not married, and de‐facto relationship), pregnancy status (pregnant and not pregnant) and history of chronic conditions. At the end of the HLQ survey in Phase 1, all participants were redirected to a separate page and were invited to express their interest and leave their details (name and preferred contact details) if they would like to be contacted by the research team to participate in an interview. The research team contacted all women who expressed their interest to participate in the interviews and shared the information sheet that included details about the research team, the interviewee and the study's aims and objectives. Participant consent was undertaken prior to the interview and there was opportunity for questions about the information provided. 2.3 Data collection Data were collected using one‐on‐one in‐depth semi‐structured interviews. The interviews were conducted online via video conferencing software Zoom. The interview guide was informed by a scoping review of the literature and the findings from the quantitative survey. The interview guide (see Supplementary ) comprised of 13 open ended questions that helped to generate detailed responses from the participants. The key areas explored in the interviews included but were not limited to knowledge and beliefs about NCDs and associated risk factors, barriers to achieving healthy lifestyle practices, avenues for accessing information about health and healthy lifestyle, and experiences of accessing health and other support services. All interviews were conducted by the primary author who is a male researcher from India experienced in undertaking mixed‐method research. He is a trained health professional (Dentist), with a Master's in Public Health. His specific interest in maternal health and health literacy evolved from an internship at the WHO in Geneva. The duration of the interviews was between 20 and 52 min. All interviews were audio recorded after obtaining written and verbal consent from each participant. The data collection was stopped once data saturation was achieved. 2.4 Data analysis The interview data (audio recordings) were transcribed verbatim using the transcription software ‘Otter’. At this stage, each participant was deidentified and was allocated a pseudonym. To ensure the accuracy of the transcripts, they were member checked by the primary author and the willing participants. 
The interview transcripts were analysed using the qualitative data analysis software NVivo. The data were analysed using reflexive thematic analysis as it provides a flexible interpretive approach to analyse qualitative data. The analysis process was mainly inductive; however, some of the theory-driven (deductive) aspects were inevitable due to the nature of the data collected (the interview guide was informed by the findings from the literature review and the quantitative survey) and due to the researcher's prior knowledge and experience around the topic. As outlined by Braun et al., six phases of reflexive thematic analysis were used during the analysis process: data familiarisation; generating initial codes; constructing themes; revising and defining themes; and writing the report. The data familiarisation was achieved during the transcription of the interviews, listening to the audio recordings and re-reading of the interview transcripts. The notes (highlighting the necessary information in each transcript) were created using the 'annotation' feature in the NVivo software. Upon further exploration of the data, initial codes were generated using the 'codes' feature. The semantic and latent codes were captured due to the nature of the data. Following this, the codes were used as building blocks and were grouped and refined to generate initial themes. The initial themes were discussed with another author (RN) and were reviewed in collaboration until the themes captured the most relevant features and addressed different elements of the research question. The six phases of reflexive thematic analysis helped to ensure the credibility of the study findings and the collaborative refinement of the initially generated themes helped to ensure the trustworthiness and integrity of the data analysis process.
The six phases of reflexive thematic analysis helped to ensure the credibility of the study findings, and the collaborative refinement of the initially generated themes helped to ensure the trustworthiness and integrity of the data analysis process.

RESULTS

Twenty women with a mean age of 35.5 years (standard deviation 5.13) participated in the interviews. The demographic characteristics of the participants are shown in Table . Four parent themes pertaining to the research questions were generated from the data:

1. Perceived knowledge and awareness of NCDs and their causative factors
2. Social determinants of health and the surrounding environment
3. Social networks and support system
4. Trust in health services and social connections

The themes (and their description), subthemes and example quotes related to each subtheme are provided in Table . The health literacy strengths and challenges were related to and influenced by the parent themes. The parent themes were observed to be overlapping and interconnected, as shown in Figure . Each theme will now be reported in more detail with the support of illustrative quotes (see Table ).

3.1 Perceived knowledge and awareness of NCDs and their causative factors

The participants described NCDs as long-term conditions that cannot be cured and require long-term and lifelong management. The women demonstrated varied knowledge and awareness of the various NCDs and associated risk factors, which was influenced by their profession, personal history of NCDs or history of NCDs in their family or social circles. Eight of the 20 participating women had a personal history of NCDs, whilst others had a history of NCDs in their family or social circle.

I guess I probably have a limited understanding, if I'm honest, I understand that chronic conditions are ones that are long term, so don't go away as opposed to kind of acute conditions, which I guess, short term ones. I didn't know a lot about diabetes until I was diagnosed with gestational diabetes. So I've learned a little bit about that condition. [ID 12]

I'm generally well versed in chronic conditions that run in my family, both my parents have high blood pressure, my mom's had a number of illnesses over the years, surely cancer, which runs in our family as well. We also have mental health issues in my family. So, my mum has depression, I have depression and anxiety, which I'm just medicated for. [ID 15]

The participants' beliefs and knowledge of the various factors that may increase the risk of developing NCDs for themselves and their children could be grouped into five subthemes: behavioural/lifestyle or modifiable risk factors, genetic or inevitable risk factors, social determinants of health, environmental factors and mental health.

3.1.1 Behavioural/lifestyle or modifiable factors

Participants considered behavioural factors such as unhealthy diet, sedentary lifestyle, tobacco, alcohol and substance use, and poor sleep patterns to all be associated with an increased risk of developing NCDs. However, most participants considered unhealthy diet as the key risk factor for NCDs.

3.1.2 Genetic or inevitable risk factors

Some participants believed that some NCDs are genetic or inevitable, and thus it can be challenging to prevent them. In addition, they reported that it can be difficult to understand the extent to which the genetic component plays a role in the development of NCDs such as cancers.
3.1.3 Social determinants of health

The participants also described how the social determinants of health and the surrounding environment (such as adequate housing, education, socio-economic status, rurality, access to health services, access to safe physical activity places and availability of healthy food items) could influence the risk of developing NCDs for themselves and their children.

3.1.4 Environmental factors

Environmental factors such as air pollution, exposure to wood/coal smoke and harmful chemicals (lead and asbestos) were also recognised as important risk factors for NCDs such as asthma.

3.1.5 Mental health

Mental health was considered both a risk factor for NCDs and separate to NCDs. The participants mentioned that mental health issues such as depression and anxiety increase the likelihood of engaging in unhealthy lifestyle practices such as unhealthy eating, alcohol and substance use and physical inactivity, and thus would increase the risk of developing NCDs.

3.2 Social determinants of health and the surrounding environment

Participants described the social determinants of health, and how their surrounding environment influenced their access to health information and services and their engagement in healthy lifestyle practices for active health management. The determinants and factors are grouped into five subthemes.

3.2.1 Upstream factors

The participants considered several upstream factors (factors considered to be out of the control of the women) which are crucial in facilitating timely access to health information and health services. These factors mainly included having a university education or health care background, which resulted in the possession of the critical appraisal and research skills necessary to access and interpret credible health information. Participants also recognised that having a stable income resulted in safe housing, good access to healthy nutrition and a supportive environment, and access to private health insurance, which in turn resulted in timely access to quality health services for women and their children. However, the differences between private and public health care, such as differences in the wait times and quality of health services, acted as a crucial challenge for participants and compounded their perception that health inequities exist in Tasmania. Participants also acknowledged that upstream factors such as social determinants of health (low education level, high cost of living, high cost of healthy food items, transport, rurality and lack of an enabling environment); overload of information or misinformation; environmental factors (harsh climate and unfavourable weather conditions); and commercial determinants of health (unethical marketing and advertisement strategies by fast-food companies) impact their access to health information, utilisation of health services and engagement in healthy lifestyle practices.

3.2.2 Healthy family environment and parental role modelling

Participants believed that a supportive family environment and parental modelling were critical in enabling them and their children to adopt healthy lifestyle practices. Women suggested that a parent's lifestyle can have an influence on their children's lifestyles and their engagement in healthy lifestyle practices.

3.2.3 Lack of health literacy responsive environment

Participants experienced a lack of responsiveness to their health needs from health services in Tasmania, which they perceived impacted their access to health information and health services.
This was due to the high cost of health services, lack of availability and long wait times associated with gaining access to specialist health services. A perceived lack of information about available health services and their utilisation, non-uniformity of the health system within Tasmania and Australia, lack of routine follow-up from the health services and lack of child-friendly physical activity spaces were also raised as concerns by the participants. In addition, the lack of locally relevant information about what to expect during and after pregnancy, which health services women and their children can access, and what to feed their children acted as additional challenges for women. Further, women also experienced a lack of awareness about NCDs specific to pregnancy, such as gestational diabetes and postnatal depression, and the future impact these conditions may have on the health of women and their children.

3.2.4 Self-efficacy of women to engage in healthy lifestyle practices

The participants reported that a woman's self-efficacy was essential for engaging in healthy lifestyle practices and actively managing health for themselves, their families and their children. The self-efficacy of participating women was influenced by their education level and stable income. Women who were aware of the future risk of developing NCDs due to family or personal history were mindful of making healthy lifestyle choices (avoiding NCD risk factors and ensuring timely access to preventive health services). In addition, many of the participants encouraged home-cooked meals and engaged their children in cooking to promote healthy eating from the early stages of life.

3.2.5 Internet and digital environment

The internet (websites, social media and peer-reviewed journal articles) was the primary and major source of health information referred to by our study participants. The websites ranged from Australian or State Government Department of Health websites, health organisations' websites (Raising Children Network, Healthline direct and Families Tasmania), university websites, and hospital websites (Royal Children Hospital Melbourne or Sydney, and women's hospitals in Melbourne or Sydney). Whilst on social media, women mainly accessed information from mothers' groups or the social media accounts of health organisations or health professionals. The websites managed by government and health organisations were considered reliable and trustworthy sources, whilst personal opinions, chats or blogs on social media were considered non-reliable sources of information. In addition, the participants reported that the Raising Children Network was the source they used most to access health information for their children and was considered reliable by the study participants.

3.3 Social networks, family connections and peer support as health navigator

The study participants recognised that their family and social connections and their support networks were crucial to managing their health. The influence of the peer, family and social support system on the health of study participants is described using the following subthemes.

3.3.1 Easy access to health information and services

Family members and social connections (social circle or mother groups) were important sources of information for study participants. Family and social connections provided women with their primary source of information on which health services to access.
Women considered having a partner, family member or friend as a health professional or in the health system a privilege. Further, having a family member or a friend practising in the health sector was considered an enabler in accessing health information and services promptly, as it provided them with insider access to the health system.

3.3.2 Ensuring good mental health and wellbeing for women and their children

The participants considered good mental health and wellbeing as crucial factors in their ability to engage in healthy lifestyle practices. Women described that establishing meaningful family and social connections was essential to ensuring good mental health and wellbeing for themselves and their children. Other factors which positively contributed to mental health included engaging in physical activities, enhancing self-care and mindfulness, establishing good sleep hygiene, spending time with their children, and listening to their children to enhance their mental resilience.

3.3.3 Lack of support system

A lack of social, physical and emotional support from a partner, family members or social connections was a recognised barrier to accessing and utilising health information and services for study participants. This lack of support contributed to the women's personal barriers and reduced their ability to actively manage their own health. These barriers were related to pregnancy and motherhood responsibilities (young children, child-related responsibilities, pregnancy-related physical and hormonal changes, poor sleep hygiene, post-pregnancy changes/complications etc.). In addition, lack of time due to added responsibilities, high workload, lack of structural support, existing NCDs, and children's tantrums and attitudes were some of the other barriers that the women described as negatively impacting their engagement in healthy lifestyle practices.

3.4 Trust in health services and social connections

Participants described their general practitioners, followed by obstetricians and midwives, as a vital source of information for them, whilst child health nurses and Child Health and Parenting Services were recognised as vital sources of information for their children's health. However, certain factors influenced the participants' trust in health care providers and health services. These factors are described in two distinct subthemes.

3.4.1 Skills of health care providers

Health care providers' skills such as communication, empathy, support, reassurance, knowledge and confidence were essential factors that had an influence on the women's trust in health services. Participants described that a lack of empathy, feeling judged, poor listening skills, unsupportive attitudes, lack of transparency and use of jargonistic language all negatively influenced their trust and relationship with their health care providers. In contrast, reassurance and a supportive attitude from health care providers positively influenced the women's trust in health services.

3.4.2 Recommended by family members or social connections

The participants' social support system influenced their trust in health services. Women usually trusted the health services and health care providers recommended by their family or social connections and were more trusting of services if they were run by the government.
DISCUSSION

This study focused on understanding the health literacy needs of pregnant women and mothers with young children in Tasmania. Additionally, it explored pregnant women's and mothers' knowledge and beliefs about the impact of NCDs and associated risk factors on women and their children's health. This study identified that pregnant women and mothers demonstrated good knowledge and awareness about NCDs and associated risk factors but experienced numerous health literacy strengths and challenges which influenced their access to health care and engagement in healthy lifestyle practices. The strengths and challenges were diverse and were mainly related to the social determinants of health and the surrounding environment, social networks and support system, and trust in health services and social connections. The themes were observed to be interconnected. The findings reinforce that the majority of lifestyle factors are beyond the control of women and thus urge the health system and policy makers to shift away from the 'blame game' and instead focus on addressing the broader social, ecological, cultural and commercial determinants of health in order to support women to engage in healthy lifestyle practices for themselves and their families. The study highlights the need to prioritise health literacy development at the individual, community and organisational level to address the evolving needs of women during pregnancy and beyond. Such a focus will support, empower and enable women and future generations to reduce the intergenerational impact of NCDs in Tasmania. Specifically, the research reinforces how the various risk factors are interactive and can be collectively addressed with a focus on health literacy development and consideration of the socio-ecological NCD model (Figure ). Women in this study demonstrated good knowledge and diverse beliefs about NCDs and associated risk factors. Women emphasised the importance of behavioural risk factors and social determinants of health and demonstrated a good understanding of the impact of NCD risk factors on themselves and their children.
The findings are consistent with a cross-sectional cohort study by Irani et al., in which participating women (mainly university educated) in Switzerland demonstrated good awareness about NCDs. However, that study was not explicitly focused on pregnant women and mothers. It is also important to note that in our study and the study by Irani et al., most women were from socio-economically advantaged backgrounds; thus, more research is required to understand the NCD-specific knowledge and beliefs of pregnant women and mothers from areas of relative socio-economic disadvantage. Further, people experiencing socio-economic disadvantage are more likely to experience significant health literacy challenges in managing health and using preventive health services. This finding urges health services and policy makers to shift from a 'one-size-fits-all' approach and invest extra effort to develop 'fit-for-purpose' solutions with women experiencing socio-economic disadvantage, to ensure that all women are supported to engage in healthy lifestyle practices and have adequate and timely access to quality health care to attain the highest achievable level of health and wellbeing for themselves and future generations. Women in this study experienced diverse challenges and strengths which influenced their ability to actively manage their health and navigate available health information and the health system. Women highlighted that pregnancy and motherhood are challenging as they come with added responsibilities and numerous physical and emotional changes, which affected their engagement in self-care and healthy lifestyle practices. This emphasises the need for additional social, physical and emotional support during this period to meet those additional needs. Participants also highlighted the need for services that were more health literacy responsive. The WHO recommends building health literacy-responsive health systems using co-design principles to accommodate diverse health literacy needs, lifelong experiences and social and cultural practices which are often beyond the control of individuals. Responsive health systems can help to design people-centred solutions, sustain user engagement, promote health equity and make services more resilient by ensuring that health information and health services are easy to access, navigate and use. A health literacy responsive environment can support the co-creation of enabling environments that decrease people's exposure to NCD risk factors propagated by commercial parties with vested interests through misinformation, disinformation and advertising. Therefore, it can help to reduce the burden of NCDs by effectively supporting women to engage in healthy lifestyle practices before, during and after pregnancy and by ensuring that health services are user-friendly and easy to navigate. Trust in health care providers and the health system was another vital element raised by our study participants. Inadequate skills of health care providers, such as the use of jargonistic communication and lack of empathy, negatively influenced the women's trust in health services. The concept of health literacy is closely linked with trust in the health system. Further, trust in health care providers is associated with increased patient satisfaction and improved health outcomes and behaviours. In addition, empathy and clear communication are important determinants of trust in health care providers.
Thus, capacity building of health care providers is crucial to enhance their skills to deliver quality and equitable health care by effectively responding to the needs of pregnant women and mothers regardless of their health literacy and socio-economic status. Information overload or misinformation on digital media and a lack of locally relevant and context-specific information around pregnancy and motherhood were critical challenges identified by the study participants. Information overload and misinformation contribute to stress, can be potentially harmful, and can lead to less healthy behaviours and diminished decision-making ability amongst pregnant women and mothers. This is an essential finding for policymakers and health professionals and highlights the need to design and evaluate pregnancy- and motherhood-related and gender-specific NCD prevention resources. These resources should be made relevant to the Tasmanian context in order to support and empower pregnant women and mothers in Tasmania to engage in healthy lifestyle practices during and beyond pregnancy. Participants highlighted that a healthy family environment and parental role modelling positively influenced their and their children's engagement in healthy lifestyle practices. A healthy family environment is associated with improved parent–child communication, increased uptake of healthy lifestyle practices and improved mental health and wellbeing. In addition, children are more likely to follow healthy lifestyle practices if their parents or carers engage in healthy lifestyle practices daily. Participants also identified that their self-efficacy and mental health were essential factors in engaging in healthy lifestyle practices. A recent scoping review by Ho et al. suggests that mothers' health behaviour and self-efficacy as primary caregivers are more likely to influence their child's health behaviour than if the father was the primary caregiver. Further, inadequate mental health and wellbeing of women during and after pregnancy makes them vulnerable to other NCDs and increases the likelihood of their children developing mental ill health. These findings emphasise the importance of using a family-centred approach to enhance the family environment, self-efficacy and mental health of pregnant women and mothers so that all mothers and their children feel supported and equipped to engage in healthy lifestyle practices from the early stages of life. Women in this study used a variety of information sources, such as the internet and digital technology, family and social connections, and health care providers. Women demonstrated confidence in using the internet and digital technology (government websites, social media, peer-reviewed journal articles) and had considerable skills to differentiate between credible and non-credible information sources. These skills may be attributed to the high educational attainment of our study participants, as there is an association between high educational attainment and significant health literacy skills. These findings are consistent with the study by Lupton, where pregnant women and mothers in Sydney placed a high value on the internet and technology to obtain timely access to health information and for establishing and enhancing their social relationships and connection with other mothers. In addition, women expressed their interest in having easy and spontaneous access to information and support from health care providers via telehealth.
However, socio-economically disadvantaged and Indigenous pregnant women and mothers in New Zealand preferred and were more comfortable with face-to-face and personalised interaction with health care providers (nurses, midwives and GPs) rather than the internet and technology for health information. This is an important finding for health systems, policymakers and health care providers, as it highlights the importance of moving away from the notion of a 'one size fits all' approach. It reinforces the importance of a focus on understanding the local context and health needs of women from different socio-economic backgrounds and the need to codesign digital solutions accordingly to overcome locally identified barriers and meet those needs effectively. In this study, family and social connections were the most critical and trusted source of health information and were considered a facilitating factor in accessing and utilising health services. A lack of family and social connections/support was a challenge for women when accessing health information and services. Previous research has shown that a woman's family and social connections can mediate the health outcomes for women and their children by influencing each woman's health decision-making and access to health services. The concept of distributed health literacy acknowledges that health literacy is distributed across families, social connections and communities and can positively or negatively influence the health outcomes for individuals and communities. This research finding illuminates the importance of social connections and highlights the importance of addressing family, social and cultural factors that influence the health literacy and health behaviours of pregnant women and mothers in Tasmania. By strengthening social capital and leveraging the positive factors within local communities, it may be possible to enhance social, peer and community support and community resilience, which in turn may support more women and their families to engage in healthy lifestyle practices.

4.1 Strengths and limitations

This is the first known study to qualitatively explore the health literacy strengths and challenges, and the NCD-specific knowledge and beliefs, of pregnant women and mothers with young children in Tasmania. The study findings add depth and meaning to the findings from the recent HLQ study of this population group. This study also enhances our understanding of the varying health literacy challenges that women in Tasmania experience when attempting to access and use health information and health services to actively manage their health and the health of the future generation. Further, this study provides a knowledge template which may be translatable to other contexts nationally and internationally (mostly in high-income countries) to understand the health literacy and broader needs of pregnant women and mothers. The major limitation of this study is the use of a purposive sampling strategy to recruit women from the subset of participants of the HLQ survey. This led to an over-representation of women with higher levels of education (mostly university educated) with limited cultural diversity. It is highly likely that women with lower educational attainment and those experiencing relative socio-economic disadvantage may face more significant health literacy challenges than the women included in this study.
Further, interviews were conducted online via Zoom, which may have impacted the participation of women with limited access to the internet and digital technology or limited digital literacy. Therefore, future research must seek to engage a more diverse sample of women, or focus specifically on disadvantaged populations using multiple recruitment methods tailored to the needs of the participants, first to determine whether their NCD knowledge and health literacy needs are comparable and second to effectively gather their perspectives to co-design relevant and responsive solutions that address the needs of these understudied women.

CONCLUSION

Pregnant women and mothers with young children demonstrated good knowledge and awareness of NCDs and associated risk factors. However, they experience varying health literacy strengths and challenges at the individual, community and health system level which influence their access to and use of health information and health services. Further, the key social determinants of health (educational attainment, stable income, surrounding environment), the health literacy responsiveness of the health system, family environment, parental role-modelling, self-efficacy of mothers, availability of locally relevant and context-specific health information, social/family support and trust in the health system all influenced the women's engagement with the health system and healthy lifestyle practices. This, in turn, affects their ability to actively manage their health and the health of their children.
This highlights an urgent need to address the socio-ecological determinants of health to enable women to improve their health and their children's health. Women also highlighted the need for extra physical, social and emotional support during this critical period to improve their health and the health of their children and our future generations. Addressing the diverse health literacy needs of this priority population by involving them in the planning, design, implementation and evaluation of solutions could help to optimise women and their children's access to locally relevant, culturally sensitive and health literacy responsive services. Further, supporting healthy lifestyle practices through locally informed health literacy development strategies, combined with the creation of enabling service environments, will be crucial to reduce the growing burden of NCDs in Tasmania and globally. Dr Rosie Nash is an Editorial Board member of Health Promotion Journal of Australia and a co-author of this article. To minimise bias, they were excluded from all editorial decision-making related to the acceptance of this article for publication. The project received ethics approval from the Tasmania Health and Medical Human Research Ethics Committee (Ethics approval number H0023036). All participants were required to read an information sheet and give electronic and verbal consent prior to admission to the interview. Data S1: Supporting Information.
Scoping review of happiness and well-being measurement: uses and implications for paediatric surgery in low- and middle-income contexts

The critical need for paediatric surgical interventions in low- and middle-income countries (LMICs) is well documented, with significant health implications highlighted by recent studies. A 2017 study by Butler et al found that roughly 85% of children in these regions are likely to need surgical care before they reach 15 years. This high percentage highlights the urgent need for accessible paediatric surgical services in these areas. In 2015, Bickler et al estimated that over 77.2 million disability-adjusted life years (DALYs) could be saved per year through essential surgical procedures. Furthermore, the WHO estimated that, in 2019, 51.8 million DALYs were due to congenital abnormalities, ranking them as the 10th leading cause of DALYs that year. Globally, addressing general surgical needs has been the focus of a large number of organisations. In 2016, Ng-Kamstra et al identified 403 surgical non-governmental organisations (s-NGOs) working across 139 LMICs. However, adequate capacity for paediatric surgical intervention remains a problem in LMIC contexts. Furthermore, Krishnaswami et al argue that robust needs assessments and standardised measures of impact and quality of care should guide effective partnerships between s-NGOs and local institutions in LMICs. Despite efforts to improve paediatric care, the lack of a standard method for evaluating postsurgical happiness and well-being complicates resource allocation. Conventional outcome measures, such as clinician-reported and observer-reported measures, often fail to consider critical aspects like happiness and well-being. This highlights the need for tailored patient-reported outcome measures and patient-reported experience measures that can more accurately address these dimensions. Many health interventions' economic assessments rely heavily on gross domestic product (GDP) as the primary measure. GDP, which quantifies the monetary value of all goods and services produced within a country, correlates with specific health outcomes related to better infrastructure and services available in wealthier nations. However, GDP does not account for non-financial aspects of happiness and well-being, leading to a shift towards more holistic measures that consider the broader impacts of medical interventions. While many indices evaluate well-being at the international and national levels, they often fail to connect directly to medical intervention contexts. This multitude of options has led to significant ambiguity across evaluations of intervention outcomes. Without consistent measures, it is challenging to compare the efficacy of different interventions or to justify the distribution of resources, particularly in LMICs. This lack of standardisation hinders the ability to identify areas most in need of improved surgical services. Current evaluation methods often yield ambiguous data, lacking spatial and temporal precision. Due to this ambiguity, the effect of increasing investment in global surgical capacity remains largely unknown, hindering the identification of specific areas within countries most in need. Further compounding these issues, the specificity of well-being evaluations varies: general well-being indices may consider a greater breadth of well-being domains.
In contrast, health-related well-being assessments may only consider indicators directly associated with physical or mental health. This diversity in methodologies, from the specific surgical impact assessments to broader happiness indices, emphasises the need for a more integrated and comprehensive evaluation tool that links paediatric surgical intervention to greater individual and population well-being. This scoping review aims to map and compare existing happiness and well-being indices methodologies and examine their application to paediatric surgical interventions in LMICs. This review seeks to highlight effective practices and identify gaps or overlooked measures by organising and contrasting different methods. The insights garnered are intended to support the development of industry standards for assessing paediatric surgical needs and inform policymakers about the broader implications of healthcare disparities. Ultimately, this could lead to a more informed and equitable allocation of healthcare resources, enhancing the well-being of children in LMICs.
Rather than a traditional systematic review focussed on outcomes, a scoping review was conducted to explore existing methodologies for happiness and well-being indices and examine their application to paediatric surgical interventions in LMICs. Scoping reviews follow five key steps: identifying the research question, identifying relevant studies, study selection, charting the data, and data summary and synthesis. This study adhered to the methodological framework developed by the Joanna Briggs Institute, along with the methodological updates by Peters et al, and followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines.
Stage 1: identifying the research question
The principal research question of this scoping review is: which methodologies are currently used to measure happiness and well-being in populations? In line with Peters et al.'s guidance, a secondary research question was included to specifically address the context of paediatric surgical interventions in LMICs: how are indicators of happiness and well-being used to assess the needs and impacts of paediatric surgical interventions in LMICs?
Stage 2: identifying relevant studies
We conducted our literature search via multidisciplinary electronic databases, including PubMed, ScienceDirect and Google Scholar, and bibliographies of relevant studies. Our search was limited to literature published in English no earlier than the year 2000. The time restriction ensures the inclusion of studies with up-to-date data, modern healthcare frameworks and relevant evaluation tools that are essential for assessing current global health challenges. We included search terms relating to happiness and well-being, happiness index methodologies, well-being index methodologies, surgical intervention, paediatric surgical intervention, LMICs, happiness in health and development, and surgical intervention impact. These search terms were selected to capture literature relating to existing methodologies for happiness and well-being indices used in global health, LMIC contexts, paediatric surgical contexts and any literature tangentially associated with these topics. The search terms were used consistently across all three databases, and two researchers searched each database separately. This search strategy was initiated in October 2023 and continued until April 2024.
In some cases, particularly for sources surrounding national and international well-being indices, a secondary search was conducted to clarify methodologies. Any sources identified in this secondary search went through the same data charting steps as sources from the initial search.
Stage 3: study selection
Studies were included if they met at least one of the specified inclusion criteria and excluded if they met any exclusion criteria, ensuring they were relevant to the research questions, especially regarding LMICs, surgical procedures and well-being or happiness outcomes. Eligible sources included research articles, review articles and technical reports.
Inclusion criteria
Paediatric surgical interventions: studies on paediatric surgical interventions specifically in LMIC settings, particularly those that assess well-being or happiness outcomes.
Health and well-being measurement: research in global health contexts that involves measurements of well-being or happiness, including subjective and objective approaches.
Happiness and well-being indices: methodologies or indices (eg, Gross National Happiness) measuring happiness or well-being, emphasising applications in LMICs and relation to paediatric health and surgery.
Surgical needs and outcomes assessments: studies assessing surgical needs or outcomes in paediatric surgery, focussing on well-being and happiness as measured or implied outcomes.
Specific conditions and populations: studies targeting particular conditions (eg, congenital heart disease (CHD), cleft lip and palate (CLP)) within LMICs and their impact on well-being and happiness in children.
Exclusion criteria
Studies published in a language other than English.
Studies with unclear methodologies or data sources.
Undefined or poorly defined concepts of happiness or well-being, when applicable.
Studies primarily focussed on socioeconomic indices.
Studies emphasising surgical techniques over intervention outcomes.
Studies published before the year 2000.
Stage 4: charting the data
Two reviewers (JH and CP) independently extracted data, including study title, authors, publication year, country of origin, aims, population, sample size, methodology and key findings. This approach was piloted in three studies to ensure the extraction was consistent with the research question (CD). Data for each category was compiled into an Excel spreadsheet for validation and coding.
Stage 5: data summary and synthesis
The fifth and final stage summarises and reports findings, which are presented in the subsequent section.
Patient and public involvement
No patient or public level data were used in this study.
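As a small, hypothetical illustration of the data-charting step in stage 4, the sketch below sets up a charting template with one column per extraction field named above; the file name and the use of a CSV file rather than an Excel spreadsheet are assumptions made for illustration, not part of the original protocol.

```python
import csv

# Extraction fields listed in stage 4 (charting the data).
FIELDS = [
    "study_title", "authors", "publication_year", "country_of_origin",
    "aims", "population", "sample_size", "methodology", "key_findings",
]

def start_charting_sheet(path: str) -> None:
    """Write an empty charting sheet with one column per extraction field."""
    with open(path, "w", newline="", encoding="utf-8") as handle:
        csv.DictWriter(handle, fieldnames=FIELDS).writeheader()

# Hypothetical usage: each included study later becomes one row in this sheet.
start_charting_sheet("charting_template.csv")
```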
As depicted in , the study selection process began with identifying 51 records, 48 sourced through database searches and 3 through other means. After removing duplicates, all 51 unique records were screened, with none being excluded at this stage. Subsequently, the full texts of these 51 records were assessed for eligibility. During this phase, 23 studies were excluded for various reasons, including focussing on the economic impacts of surgery rather than well-being, clinical techniques for specific procedures rather than the broader impact of the intervention or inapplicability to the LMIC context. Ultimately, 28 studies met the inclusion criteria and were included in the qualitative and quantitative synthesis. Among the included studies, 39% focussed exclusively on lower-middle-income countries, 14% on upper-middle-income countries and 10% on high-income countries. An additional 25% covered multiple income categories, including high- and low-income settings, providing a broader global perspective. Furthermore, 10% of studies were theoretical or methodological, with no specific geographic sample. Geographically, South Asia appeared most frequently, with representation in nine studies, followed by Southeast Asia (three studies) and Europe (two studies). North America and East Asia were the least represented, with only one study each. Additionally, eight studies covered multiple regions, and four had no specific geographic focus due to their study design. In terms of well-being indices, the Bhutan Gross National Happiness Index (BGNHI), Organisation for Economic Co-Operation and Development Better Life Index (OECD BLI), Human Development Index (HDI) and Happy Planet Index (HPI) were the most frequently referenced. However, none specifically addressed surgical or child-centred measures. These general indices were chosen for their broad applicability to global health. The review identified two primary types of well-being measures: subjective and objective. Subjective measures were used in 18% of studies, objective measures in another 18% and a combination of both in 64%, illustrating diverse methodological approaches. Health emerged as a critical indicator in 27 studies (96%), underscoring its central role in assessing well-being and happiness outcomes.
Methodologies currently used to measure happiness and well-being in populations
When considering methodologies for measuring population happiness and well-being, we identified two categories: subjective and objective well-being. Choon et al identified this distinction as 'inner' (subjective) versus 'outer' (objective) indicators, meaning indicators that relate to an individual's perceived emotional or physical experience and indicators related to an individual's environment or physical state, respectively. Many existing indices that measure well-being within global health and development contexts focus primarily on objective indicators. Several prominent non-GDP-based indices and their methodologies are outlined as follows.
Bhutan Gross National Happiness
The BGNHI is drawn from national survey data and based on four Bhutanese principles: sustainable and equitable economic development, conservation of the environment, preservation and promotion of culture, and good governance. The Gross National Happiness Index (GNHI) was explicitly developed to provide a more holistic alternative to GDP for measuring national well-being and success. Bhutan was the first nation to include happiness as a component of state policy.
The index consists of nine equally weighted domains: psychological well-being, health, time use, education, cultural diversity and resilience, good governance, community vitality, ecological diversity and resilience, and living standards. These nine domains consist of 33 clustered indicators with 124 variables of differing weights—objective indicators are given higher weights, while subjective indicators are given lower weights. While some indicators resemble those of other well-being indices (literacy rates, education, etc), the GNHI is unique in that Bhutan's values and traditions are reflected in several indicators, such as respect for the sacredness of nature. It reflects the Bhutanese philosophy about happiness as more than a feeling or emotional state but a concept rooted in the interconnectedness of living beings. The mathematical structure is based on the Alkire Foster method, where a sufficiency level (rather than deprivation level) is attached to each variable. The GNHI is then calculated as a value between 0 and 1 with one of the two following equations:
(1) GNHI = 1 − (H_n × A_n)
(2) GNHI = H_h + (H_n × A_s)
where H_h is the proportion of the population with a sufficiency score greater than or equal to 66%; H_n is the proportion of the population with a sufficiency score below 66%; A_s is the percentage of domains in which people who are not yet happy experience sufficiency (similar to an 'intensity' value); finally, A_n is the percentage of domains in which not-yet-happy people lack sufficiency. The GNHI has since been cited in several articles exploring happiness and well-being, serving as a basis for new happiness index development.
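To make the Alkire Foster aggregation above concrete, the sketch below evaluates both GNHI equations on invented data. The 66% sufficiency cutoff comes from the text; the per-person sufficiency shares are hypothetical, and the differing indicator weights used in the real index are ignored for simplicity.

```python
# Minimal numeric sketch of the two GNHI equations described above.
# The 66% sufficiency threshold follows the text; the per-person domain
# sufficiency shares below are invented, not real survey data.

THRESHOLD = 0.66

# Share of indicators in which each person has achieved sufficiency.
sufficiency_share = [0.9, 0.7, 0.5, 0.4, 0.8, 0.6]

happy = [s for s in sufficiency_share if s >= THRESHOLD]
not_yet_happy = [s for s in sufficiency_share if s < THRESHOLD]

H_h = len(happy) / len(sufficiency_share)        # headcount of the 'happy'
H_n = 1 - H_h                                    # headcount of the not-yet-happy
A_s = sum(not_yet_happy) / len(not_yet_happy)    # avg sufficiency among not-yet-happy
A_n = 1 - A_s                                    # avg insufficiency among not-yet-happy

gnhi_1 = 1 - H_n * A_n        # equation (1)
gnhi_2 = H_h + H_n * A_s      # equation (2)

# Because H_h + H_n = 1 and A_s + A_n = 1, the two forms coincide.
assert abs(gnhi_1 - gnhi_2) < 1e-9
print(round(gnhi_1, 3))  # 0.75 for this toy data
```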
OECD Better Life Index
The OECD BLI emphasises two well-being categories: current and future well-being. The framework for current well-being has four features that guide the dimensions of the index: (1) focus on people, meaning the experience and community relations of individuals and households, rather than the economy; (2) focus on well-being outcomes rather than inputs or outputs, assessed by both objective (non-self-reported) and subjective (self-reported) measures; (3) considers the distribution of well-being outcomes across populations; (4) considers subjective experiences as well as objective assessments of well-being. In total, 11 dimensions measure current well-being, including health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, personal security, subjective well-being, income and wealth, jobs and earnings, and housing. Future well-being is assessed through indicators of different types of capital, such as economic, natural, human and social capital, which drive well-being over time. The index has a three-level hierarchical structure in which level 1 comprises the individual indicators that form the 11 dimensions, level 2 comprises the 11 dimensions and level 3 is the OECD BLI. Each indicator value is normalised via the equation:
I_x = (actual value − minimum value) / (maximum value − minimum value)
where the 'actual value' is the country value for the indicator, the 'minimum value' is the global minimum for the indicator and the 'maximum value' is the global maximum for the indicator. A composite index for the education dimension is obtained by averaging the indices for expected years of schooling and mean years of schooling. If the indicator measures a negative value, the normalisation formula is
I_x = 1 − (actual value − minimum value) / (maximum value − minimum value)
The normalised values for all indicators within a dimension are then averaged with equal weights to obtain a single aggregate dimensional value. However, the OECD has not adopted a singular method of aggregating the 11 dimensions to obtain the total OECD BLI value. Instead, the users of the OECD BLI interface can assign dimensional weights manually. This system reflects an ongoing debate surrounding how best to weight complex multidimensional indices—assigning equal weights incorrectly assumes that each dimension has an equal bearing on well-being, but assigning differential weights manipulates results. Balestra et al conducted a study using OECD BLI website data to identify which dimensions are weighted the highest on average. The results show that health, education and life satisfaction are weighted the highest by users of the OECD BLI. Furthermore, a growing body of literature has explored the development of non-compensatory methods to overcome the compensation effect, meaning success in one indicator compensating for a deficit in another indicator in composite indices. Koronakos et al proposed a Multiple Objective Programming assessment framework for the BLI, incorporating public opinion to create weight restrictions, reducing the compensation effect. Carlsen conducted a study using partial data ordering to address the compensation effect in the World Happiness Index (HI), an index calculated by the arithmetic addition of its seven indicators. In doing so, Carlsen considered all seven indicators simultaneously for 157 countries, leading to a different international HI ranking. Thus, the compensation effect is also a concern for how countries are ranked and compared based on their index value.
Human Development Index
The HDI, initially designed by Mahbub ul Haq in 1990, has been implemented by the United Nations Development Programme to measure global development. The HDI consists of four indicators—life expectancy, expected years of schooling, mean years of schooling and per capita income, making up three dimensions of the HDI: health, education and income. In the same manner as the OECD BLI, indicator variables are normalised and transformed to a unitless index value between 0 and 1 using the following formula:
I_x = (actual value − minimum value) / (maximum value − minimum value)
For the income dimension, the same equation is used but with the natural logarithm of each variable as follows:
I_x = (ln(actual value) − ln(minimum value)) / (ln(maximum value) − ln(minimum value))
The HDI is then obtained by averaging the health, education and income indices as follows:
HDI = (I_health × I_education × I_income)^(1/3)
Unlike the OECD BLI, equal weights are assigned to each dimension of the HDI, leading to the same criticisms of assuming that each parameter matters equally to well-being and concerns over the compensation effect.
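As a worked sketch of the normalisation and aggregation steps just described for the OECD BLI and the HDI, the snippet below computes an HDI-style value for a single hypothetical country. The goalpost (minimum and maximum) values and the country figures are illustrative assumptions rather than official UNDP inputs, and the education composite follows the schooling average mentioned above.

```python
import math

# Worked sketch of min-max normalisation and geometric-mean aggregation.
# Goalposts and country values are illustrative assumptions only.

def normalise(actual, minimum, maximum):
    """Positive-direction min-max normalisation to a 0-1 index."""
    return (actual - minimum) / (maximum - minimum)

def normalise_log(actual, minimum, maximum):
    """Income-style indicator: normalise on a natural-log scale."""
    return (math.log(actual) - math.log(minimum)) / (math.log(maximum) - math.log(minimum))

I_health = normalise(70.0, 20.0, 85.0)            # life expectancy (years)
I_edu_expected = normalise(12.0, 0.0, 18.0)       # expected years of schooling
I_edu_mean = normalise(8.0, 0.0, 15.0)            # mean years of schooling
I_education = (I_edu_expected + I_edu_mean) / 2   # education composite (simple average)
I_income = normalise_log(5000.0, 100.0, 75000.0)  # per capita income (log scale)

# Equal-weight geometric mean of the three dimension indices.
HDI = (I_health * I_education * I_income) ** (1 / 3)
print(round(HDI, 3))
```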
The HDI has also been criticised for being redundant with other measures for human development or only limitedly useful. Ranis et al conducted a study in which the HDI was tested for correlation with 39 indicators across 11 broad domains of human development. The HDI was only correlated with 8 of the 39 indicators, suggesting it is not a strong indicator for broad human development. However, when the same test was performed against under-five mortality and per capita income, two of the most common development indicators, the HDI performed equally as well as under-five mortality and better than per capita income as a measure of broad human development.
Happy Planet Index
The HPI measures sustainable well-being through three domains: life expectancy, experienced well-being (average of individual responses to rank oneself on a ladder of life from 0 to 10) and ecological footprint (the average amount of land needed, per person in the population, to sustain typical consumption patterns). The index is calculated as follows:
HPI = (α × life expectancy × experienced well-being + β − γ) / (ecological footprint + ε)
where α = 0.75 and γ = 54.92, both of which are scaling constants, β = 2.92, which ensures the coefficient of variance is equivalent for well-being and life expectancy, and ε = 6.39, which ensures that the coefficient of variance for 'ecological footprint' is equivalent to that of the 'happy life years' measure (life expectancy multiplied by experienced well-being).
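As a plug-in illustration of the HPI formula above, the sketch below uses the four constants quoted in the text; the life expectancy, ladder score and ecological footprint inputs are invented for illustration, and the grouping of terms follows the formula as printed above.

```python
# Plug-in sketch of the HPI formula as given above. Constants are those quoted
# in the text; the country inputs are hypothetical.

ALPHA = 0.75
BETA = 2.92
GAMMA = 54.92
EPSILON = 6.39

def happy_planet_index(life_expectancy, experienced_wellbeing, ecological_footprint):
    numerator = ALPHA * life_expectancy * experienced_wellbeing + BETA - GAMMA
    denominator = ecological_footprint + EPSILON
    return numerator / denominator

# Hypothetical country: 72 years, ladder score 5.5, footprint 3.2 global hectares.
print(round(happy_planet_index(72.0, 5.5, 3.2), 1))
```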
Subjective well-being
A growing body of literature emphasises that subjective well-being, meaning well-being as it is identified by the individual and not by 'objective' data, is heavily influenced by health status in childhood and throughout life. Arguments have been made for broadening standard population-level health indicators beyond morbidity and mortality to include a third indicator encompassing biological health and 'lived health'. Stucki and Bickenbach referred to this third indicator as 'functioning', intended to capture an individual's capacity and performance with respect to any physical limitations or health conditions. Thus, health is relevant in subjective well-being, and to capture a holistic measure of population health, a more subjective indicator of 'lived health' may be necessary. Subjective well-being is incorporated into existing non-GDP-based well-being indices to varying degrees. The GNHI and the OECD BLI are the most inclusive of subjective measures. However, the GNHI assigns lower weights to indicators with higher levels of subjectivity, and the BLI does not provide any guidance on dimensional weights. The HPI has an 'experienced well-being' category. Still, it comprises only one subjective indicator, a 'ladder of life', which has been shown to possess good convergent validity with other emotional well-being measures, specifically in children. However, indices specific to subjective well-being typically lack objective measures, resulting in a similar loss of holistic measurement. Additionally, the validity of cross-cultural and cross-national comparisons when relying solely on subjective well-being is controversial. The Pemberton Happiness Index (PHI), developed by Hervás and Vásquez, is a subjective well-being-based index tested across multiple regions to validate its consistency across geographic and cultural boundaries. The index was developed to capture both remembered well-being (a retrospective, memory-based assessment) and experienced well-being (a momentary assessment of the active state of well-being). The final structure consists of 11 items to capture general, eudaimonic, hedonic and social domains of remembered well-being and 10 items to capture experienced well-being. The index value is obtained by adding the scores (0–10 scale) from the 11 remembered well-being items with the score for experienced well-being (sum of positive items experienced and negative items not experienced) and then dividing by 12. However, the PHI was not designed to be a national index or a tool in development contexts; instead, it was constructed based on existing indices and measurements for clinical contexts. It does not include any 'objective' indicators necessary for creating a holistic index for global health and development needs assessments. Choon et al developed an integrated happiness framework for sustainable development based on existing non-GDP-based happiness and well-being indices and the Positive Emotion, Engagement, Relationships, Meaning and Accomplishment (PERMA) psychology model. It consists of eight outer dimensions (environment, education, governance, culture, community, health, safety and economics) based on the GNHI, the OECD BLI and the Malaysia Happiness Index (there was not sufficient literature or empirical evidence available to warrant inclusion of the Malaysia Happiness Index in our study), and five inner dimensions (positive emotion, engagement, relationships, meaning and accomplishments), based on the PERMA model. To test the model, a questionnaire was created with four sections: happiness and value of life, external environment, positive psychology and demographics. Each dimension of the model consists of three questions, and each question, excluding those of the demographics section, is scored on a 7-point Likert scale. The same normalisation method used by the OECD BLI and the HDI is applied to convert the indicators into indices. Bridging the gap between the objective and subjective has been the focus of an entire branch of literature. van Praag et al assert that subjective well-being, or 'self-reported satisfaction', is a key tool for developing and assessing socioeconomic policy, claiming that subjective questions and responses may be used as proxies for individual satisfaction, and general domain satisfaction is explainable by objective variables. In other words, rather than treating subjective and objective as two different categories, van Praag et al interlinked them, using a model where general satisfaction, GS, consists of six domains of satisfaction, DS_1…DS_J, each of which depends on observable, objective variables, x. Salameh et al used an ordered logit and tobit model to identify socioeconomic determinants of subjective well-being and found that income, education, government effectiveness, no perceived corruption and perceived institutional quality improve well-being, while lower trust in family and friends, poor health status, living in rented housing and dissatisfaction with hospital services are negatively associated with subjective well-being. Sujarwoto et al conducted a three-level logit regression study to explore a multilevel data structure with individual, household and district data for Indonesia. The findings show that happiness and life satisfaction are significantly associated with household and district government-level and individual factors. Thus, well-being comprises a complex combination of objective and subjective factors spread across individual, household and institutional domains. In nearly all included studies, health is highlighted as a key indicator for happiness and well-being.
For example, Salameh et al identified poor individual-level health and dissatisfaction with health systems as factors that decrease well-being. Similarly, in a study exploring the significance of happiness in relation to health system efficiency, See and Yen found that health system inefficiency and happiness levels are inversely related. Paediatric health has been identified as an important indicator of happiness throughout life. Sujarwoto et al found that the association between poor childhood health and adult happiness levels was very significant, with a magnitude of 32% between the presence of an emotional, nervous or psychiatric episode in childhood and happiness levels in adulthood. Ettinger et al studied how best to support general child well-being through a community-based participatory research study exploring community-rooted definitions and approaches to child and youth thriving. Study participants identified 104 unique items associated with child thriving, sorted into seven domains: a healthy environment, safety, positive identity and self-worth, caring families and relationships, strong minds and bodies, vibrant communities, and fun and happiness. This brings us to our secondary question.
Utilisation of happiness and well-being indicators to assess the needs and impacts of paediatric surgical interventions in LMICs
Research on well-being and happiness within the context of surgical interventions in LMICs is sparse, but limited existing research suggests that they are strongly associated. For example, Feeny et al found statistically significant improvements in all assessed well-being categories in a study on the non-monetary benefits of cataract surgery. After surgery, the percentage of patients who reported some level of difficulty with autonomy in mobility, self-care or performing activities decreased from 30% to 10%, and the percentage of patients who self-reported health as 'poor' declined from 46% to 6%. The percentage of self-reported mental health as 'very good' or 'excellent' increased from 6% to 51%, the percentage who reported moderate to high levels of anxiety or depression decreased from 88% to 24%, average emotional well-being scores increased from 39 to 73 (on a 100 point scale), average self-assessed life-satisfaction increased from 5.1 to 7.6 (on a 10 point scale), average hope values increased from 27.2 to 37.5 (on a 100 point scale) and average self-efficacy increased by 4 points. Results in paediatric-specific surgical interventions similarly link well-being and surgery. Ladak et al conducted a study exploring postoperative, health-related quality of life (HRQOL) in children and adolescents with CHD. The study used the PedsQL 4.0 Generic Core Scale to assess domains of physical, emotional, social and school functioning; the PedsQL Cognitive Functioning Scale to explore cognitive functioning; and the PedsQL 3.0 Cardiac Module to assess disease-specific HRQOL. HRQOL was significantly lower in CHD subjects than in their age-matched healthy siblings for all domains, particularly emotional, psychological, physical and school functioning. Similarly, in a study exploring the impact of CLP surgery on adolescent life outcomes, Wydick et al found that children with CLP experience statistically significant losses in indices of speech quality (−1.59σ), physical well-being (0.32σ), academic and cognitive ability (−0.37σ), and social integration (−0.32σ).
The results also show that surgical intervention restores social integration and inclusion, speech outcomes are vital for social inclusion, and early surgery produces strong speech outcomes and restores general human flourishing (composite index of all assessment indices). Subjective social status is also directly associated with happiness, according to a study exploring perceived position on community respect and economic ladders and happiness levels measured in birth cohorts from Guatemala, the Philippines and South Africa. Thus, linking surgical intervention and social inclusion also links surgical intervention directly to happiness. Well-being and surgical outcomes appear to have a reciprocal relationship—surgical outcomes impact well-being, and, according to Ladak et al , non-health-related domains of well-being impact surgical outcomes. In a qualitative study using the Social Ecological Model (SEM), Ladak et al explored parental perspectives on the influence of sociocultural and environmental factors on HRQOL of CHD patients. SEM includes intrapersonal and interpersonal, institutional, sociocultural and public policy factors, all of which were found to have a substantial impact on the HRQOL of children following CHD surgery. Thus, understanding and measuring both health-related and non-health-related indicators of well-being are vital to supporting paediatric health outcomes. The link between surgical intervention and well-being also applies to caregivers. Ladak et al found that mothers frequently face detrimental impacts on their social and emotional well-being when serving as the sole primary caregiver to a child with CHD. Evidence suggests surgical intervention can restore well-being levels to caregivers and patients. Feeny et al found significant improvements in all measures of well-being for both patients and caregivers after cataract surgery.
Specifically, the percentage of caregivers who self-reported health as ‘very good’ or ‘excellent’ increased from 13% to 45%, the percentage of caregivers who self-reported mental health as ‘very good’ or ‘excellent’ increased from 13% to 57%, emotional well-being scores (on a 100 point scale) increased from 47 to 76, life-satisfaction values increased by 1.7 points on a 10 point scale, average values of hope increased from 33.1 to 39.0 (on a 100 point scale) and finally, self-efficacy increased by 5 points. This scoping review explores existing happiness and well-being indices methodologies and assesses their application to paediatric surgical interventions in LMICs. A key strength of the review is its ability to identify research gaps, offering valuable guidance for future studies. By exploring a wide range of studies, the review captures the multidimensional nature of well-being, showcasing different perspectives that can inform the development of standardised approaches for assessing paediatric surgical needs and addressing broader health disparities. This is particularly relevant for clinicians and stakeholders seeking to improve healthcare equity and understand the impact of surgical interventions on well-being in LMICs. In terms of clinical utility, the findings of this review can guide paediatric surgeons and healthcare providers by emphasising the importance of integrating well-being assessments into clinical practice. Standardising the measurement of happiness and well-being in paediatric surgery could help surgeons assess immediate surgical outcomes and gauge the broader impact of surgery on a child’s long-term quality of life. This could facilitate more holistic care planning, tailoring interventions to not only address physical outcomes but also enhance the emotional and psychological well-being of patients. For example, understanding the cultural and social dimensions of well-being in LMICs can help paediatric surgeons improve communication with families and align treatment goals with the broader needs of the child. However, this review has some limitations. As a scoping review, it does not evaluate the quality of the included studies or synthesise evidence into conclusive statements. The wide range of methodologies examined also results in inconsistencies in study design, sample sizes and measurement tools, which may limit the reliability of conclusions. The scope of the review was confined to research articles, review articles and technical reports in English, potentially excluding valuable insights from non-English sources. Furthermore, the inclusion of diverse methodologies for measuring happiness and well-being may result in high heterogeneity, making it challenging to draw consistent conclusions or recommend standard practices for LMICs and paediatric surgical settings. Finally, while this review emphasises quantitative methodologies, excluding qualitative studies may limit the depth of insights into subjective well-being. This limitation underscores a tension between the study’s focus on measurable, quantitative indices and its conclusion advocating for a more comprehensive integration of subjective well-being in clinical practice. Future studies should, therefore, consider incorporating qualitative methodologies to enrich well-being assessments, offering a more holistic view that aligns with the ultimate goal of patient-centred care. 
Despite these limitations, this broad review identifies gaps in current research and highlights the need for more standardised, holistic measures that include subjective and objective indicators. By synthesising a diverse range of studies, the review offers valuable groundwork and direction for future research, emphasising areas where more focussed, outcome-driven studies could improve well-being assessment and enhance global health interventions in LMICs. The review highlights the urgent need for improved, standardised methodologies to assess well-being and happiness in paediatric surgical interventions. The lack of consistency makes it difficult to draw definitive conclusions or compare results across studies. While some research links surgical interventions to improvements in well-being, measuring these outcomes alongside surgical interventions is not yet a standard practice in paediatric surgery. The adoption of well-being metrics in clinical settings could provide paediatric surgeons with valuable insights into patient recovery and long-term quality of life, allowing for more comprehensive postsurgical care that addresses both physical and emotional outcomes. Moreover, many existing methodologies lack the spatial and temporal precision required to determine where interventions are most needed or to assess their long-term effects, complicating efforts to deliver targeted healthcare. Additionally, most well-being indices focus too heavily on either objective or subjective measures, failing to capture the full range of experiences faced by children undergoing surgical interventions. The lack of cultural adaptability in many of these methodologies further limits their effectiveness in LMIC settings, where cultural differences may influence perceptions of well-being and happiness. This could lead to skewed data and less effective clinical interventions. There is a pressing need to develop robust, integrated and culturally sensitive methodologies that can more effectively assess well-being outcomes following surgical interventions to address these significant gaps. Such methods would improve the understanding of the impacts of healthcare interventions, enabling paediatric surgeons to make better-informed decisions and promote more equitable healthcare delivery. By incorporating well-being assessments into everyday clinical practice, paediatric surgeons can offer more holistic care that improves physical outcomes and enhances the overall quality of life for children undergoing surgery in LMICs. This review sets the stage for future research and calls for concerted efforts to bridge these gaps, reduce disparities and enhance the well-being of children following surgical interventions in LMICs.
The Landscape of Risk Communication Research: A Scientometric Analysis
Such high-level knowledge is primarily useful for researchers engaged in a given research domain to better understand its scope, nature, and development trends; its focus topics and themes; and key documents and authors. This is especially useful for early career researchers who are relatively new to the domain, but can also be helpful for experienced academics, for instance, in preparing lecture materials, or for journal editors to focus research attention on hot topics, e.g., by opening special issues on a specific theme. Several scientometric analyses have been published on risk, safety, health, and environment-related topics. These include an analysis of safety culture , road safety , resilient health care , sustainability and sustainable development , disaster risk management , slip and falls at worksites , electronic cigarettes , health and young people’s social networks , and process safety in China . In light of this, the aim of this article is to present a scientometric analysis of the risk communication research domain. The specific research questions are as follows: RQ1. What are the overall publication trends in terms of publication output? RQ2. What geographic trends can be observed at a country level? RQ3. What scientific categories are strongly represented? RQ4. What journals are dominant knowledge carriers and what knowledge do these draw on? RQ5. What are the dominant narrative topics, and what is their temporal evolution? RQ6. What is the evolution in research clusters, associated research fronts, and key documents? The remainder of this article is organized as follows. In , the document search strategy, data retrieval process, and resulting dataset are described, followed by a brief overview of the scientometric techniques and tools to answer the above research questions. The results and their interpretation are presented in . In , a discussion is given, contextualizing the work and providing directions for future research. concludes. There are four main steps in a typical scientometric analysis: formulating questions, data retrieval, application of suitable scientometric methods and tools, and interpretation of the results. The data retrieval strategy and resulting dataset is described in , and the scientometric methods and tools are briefly introduced in . 2.1. Data Retrieval Strategy and Resulting Dataset The world’s largest and most comprehensive database of scientific publications, Web of Science Core Collections (WOSCC) was applied in this study to retrieve a high-quality dataset. Compared to other popular databases such as Scopus, SciFinder, or Google Scholar, WOSCC is the most comprehensive one across scientific disciplines, while also having a very high data quality . The following search strategy was applied in the WOSCC database on 13 March 2020: Title = “risk communication” AND Document type = NOT (correction OR early access) A title-based search strategy was applied in order to ensure that identified documents indeed focus on risk communication. A prior exploratory search based on title, abstract, and keywords led to a much larger dataset of over 5500 documents. Of these, many are however not directly relevant to obtaining insights into the risk communication research domain but instead mention risk communication more tangentially while focusing on risk perception, stakeholder participation, or other aspects of risk management or governance. 
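As an aid to readers who wish to reproduce a comparable retrieval, the restriction above could be rendered roughly as follows in Web of Science advanced-search syntax. This is an approximation using the standard TI (title) and DT (document type) field tags; the exact tags and document-type labels in the interface the authors used may differ.

```
TI=("risk communication") NOT DT=(Correction OR "Early Access")
```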
With the applied title-based search process, all document types are retained in the resulting dataset, except articles presenting authors’ corrections to earlier publications and early access articles, i.e., articles which were not yet in final print at the time of the search. The timespan covered in the search ranges from 1900 until 2019 (inclusive). The resulting dataset contained 1196 articles, which can be considered as the core scientific body of literature on risk communication. contains some key descriptive information of this dataset, obtained through the R package Bibliometrix . The results, which partially answer RQ1, show that risk communication research spans from 1985 to 2019, with 523 different journals contributing to the domain’s literature. In total, 3137 authors have (co-)authored at least one document, with only 296 authors having contributed a single-authored document. By far, most of the work is the result of multi-author collaboration, as can be seen from the high collaboration index of 3.39 and the average number of co-authors per document. The average number of citations per document is 14.82, which is relatively high. This indicates that risk communication research is quite impactful in the academic community. 2.2. Applied Scientometric Methods: Techniques and Tools Various scientometrics methods were applied to answer the research questions listed in . Scientometric analysis involves the application of quantitative methods for detecting trends, patterns, and developments of a scientific research domain . By visualizing quantitative metrics which represent informational aspects of the research domain, insights are obtained into its scope, contents, and development . contains an overview of the techniques and tools used to answer the research questions in this study. These are briefly described below. Trends in research outputs (RQ1) are basic scientometric indicators, providing insights into the development of research activity over time. Apart from a simple count of publications per year, a regression analysis was performed to estimate the rate of change. Other basic trends of the publications in the research domain were determined by elementary summary statistics, using Bibliometrix software . The geographic patterns (RQ2) were identified by counting the number of articles originating from the different countries/regions in the world. For each country-related subset of the data, additional metrics were calculated to provide insight in the temporal activity of different geographical areas and to assess the average impact of publications from the areas. Bibliometrix software was used for these basic calculations . To identify collaboration networks between countries/regions in risk communication research, the visualization of similarities mapping technique was applied . This technique quantitatively analyzes similarities between documents according to a chosen data object, in this case country/region of origin. The VOSviewer software determines citation networks in which the distance between nodes shows the level of closeness to each other and the node size represents the number of documents . Insights into the scientific categories represented in risk communication research (RQ3) were obtained by mapping the journal categories on the global science map . This map shows clusters of different scientific disciplines, providing a high-level visual overview of the complete scientific body of knowledge. 
Mapping the journal categories associated with the risk communication publications of the obtained dataset provides insights into what scientific domains actively contribute to the development of knowledge in this research area. The analysis and visualization were done with VOSviewer . The information flow to and between journals as knowledge carriers in risk communication research (RQ4) was analyzed using the dual-map overlay . This map shows the interconnections between over 10,000 journals, where these journals are grouped in regions representing publication and citation activities at the domain level . The dual-map overlay enables insights into how specific domains of knowledge (citing articles) are influenced by other domains (cited literature), where the latter can be regarded as the intellectual base of the knowledge domain in focus . The dominant narrative patterns in the risk communication domain (RQ5) were identified using the automatic term identification method to extract terms or noun phrases from the bibliographic data about the documents in the dataset. In the present work, terms are extracted from title, abstract, and keywords. A data cleaning process was applied to combine similar terms in the resulting term list. VOSviewer was applied to cluster the terms, to determine associated heat maps, and to obtain additional bibliometric indicators such as the average publication year and average impact of terms. This information provides insights into trending topics over time and helps to determine topics that are scientifically fruitful. The evolution of research clusters, research fronts, and key documents (RQ6) was performed through a co-citation analysis using CiteSpace software . Co-citation analysis was first proposed by Small as a method to measure the relationship between two documents. Two documents are co-cited when they appear together in an article’s reference list. Resting on the premise that articles focusing on similar themes will cite partially the same articles, co-citation information in a set of documents provides high-level insights into the similarities between documents, from which research clusters can be identified. Recognizing that cited references can be considered indicative of the intellectual basis of a given area of research, the highly cited articles in these clusters can be considered key documents driving a domain of scientific work . Furthermore, the articles citing most references from a given co-citation cluster are known as research fronts. In scientometrics research, these research fronts are considered to be the figureheads of a research cluster, providing insight in a subdomain of academic focus .
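The counting step at the heart of such a co-citation analysis is easy to state. The sketch below is a minimal illustration with hypothetical reference lists, not the CiteSpace implementation: it tallies how often pairs of references appear together in the same reference list. The same pair-counting pattern, applied to extracted terms instead of references, underlies the term co-occurrence maps used for RQ5.

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists, one per citing article (labels are placeholders).
reference_lists = [
    ["RefA", "RefB", "RefC"],
    ["RefA", "RefB"],
    ["RefB", "RefC", "RefD"],
]

cocitations = Counter()
for refs in reference_lists:
    # Every unordered pair of references cited by the same article
    # counts as one co-citation event for that pair.
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# Pairs co-cited more than once would become the stronger links of the network.
print([pair for pair, n in cocitations.items() if n >= 2])
```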
In this section, the results of the various scientific analyses are shown and interpreted. Each subsection presents the analysis results to answer research questions RQ1 to RQ6. 3.1. Temporal Distribution The annual trend of publication activity in the risk communication research domain is shown in . The first article was published in 1985, entitled “A Nonadvocate Model for Health Risk Communications”, authored by Petcovic and Johnson .
This indicates that risk communication research originates from a practical need to inform patients about health risks. The global trend of annual number of publications and the associated cumulative number shows an exponential increase. After a period with only a handful of publications annually at the initial stage of development of the research domain in the mid-1980s, a relatively steady stream of about 15 articles per year was published between about 1990 and 2000. From then onwards, the number of publications escalated quickly, with an increase to over 70 articles published annually after 2015. The research volume before 1990 amounts to 2.9% of the total, with the relative share of the period of 1990–1999 rising to 12.0%, further increasing to 29.3% in the period of 2000–2009, and finally reaching 55.8% in the period after 2010. This shows that risk communication research has experienced a rather dramatic increase in research productivity since its inception. 3.2. Geographical Distribution shows the geographic distribution of risk communication research globally. It is seen that, in total, 63 countries/regions have contributed to the 1196 documents comprising the dataset obtained in . The most productive countries, defined here as those with more than five publications, are listed in . For these countries, additional metrics including the average publication year and the average number of citations are determined as well. It is seen that the vast majority of risk communication research originates from Western countries, with the United States of America (502 articles, 41.9%), the United Kingdom (177, 14.8%), Germany (93, 7.8%), the Netherlands (68, 5.7%), and Canada (58, 4.8%) comprising the top five most productive countries. The dominance of North America and Western Europe in research productivity is striking, while the research activity in Oceania, Asia, Eastern Europe, South America, and Africa is much lower. Australia and Japan are the only countries outside North America or Europe in the top 10. Within Europe, by far, most of the work originates from the United Kingdom, Germany, and the Netherlands, with Italy, Sweden, France, Norway, and Spain also contributing moderately. Eastern Europe is very poorly represented in risk communication research. In Asia, the research is most developed in the Far East, including Japan, the People’s Republic of China, and South Korea. Despite the lower productivity in absolute terms, it is found that some countries in the list of , such as the People’s Republic of China and South Korea, have only relatively recently become active in this research domain. The top five most productive countries have been active for a much longer time, as seen from their comparatively low average year of publication. In terms of impact, the top highly productive countries also generally contribute the most impactful research. As is seen from the average number of citations, research originating from the USA, UK, Canada, and the Netherlands has attracted most citations on average, while work from some less productive countries including Switzerland, Israel, and Belgium also ranks relatively highly. The scientific impact of other countries is in general rather low, with average citation rates of around 5. This underscores the dominance of North America and Western Europe in the risk communication research domain. 
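As a brief aside on the growth trend reported at the start of this section, the rate of change mentioned in the methods can be estimated with a simple log-linear regression of annual output on year. The sketch below uses made-up annual counts (not the study's data) purely to illustrate the calculation.

```python
import numpy as np

# Hypothetical annual publication counts (illustrative, not the study's data).
years = np.arange(2000, 2020)
counts = np.array([15, 16, 18, 21, 22, 25, 27, 31, 33, 38,
                   40, 44, 47, 52, 58, 63, 70, 72, 78, 85])

# Exponential growth model: count ~ exp(a) * exp(b * year),
# estimated by fitting log(count) = a + b * year with least squares.
slope, intercept = np.polyfit(years, np.log(counts), 1)
growth_factor = np.exp(slope)
print(f"Estimated annual growth factor: {growth_factor:.3f} "
      f"({(growth_factor - 1) * 100:.1f}% per year)")
```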
The country collaboration network, shown in , shows that the most active countries in North America and Western Europe, the United States of America and the United Kingdom, are also the ones with the most international collaborations. Transatlantic collaboration is strongest between the USA and the UK, but Germany and the Netherlands also have such links. While the USA has the strongest academic links with Canada, Australia, Japan, China, and South Korea, the UK has stronger links with other European countries. 3.3. Scientific Categories Each journal in the Web of Science Core Collection is classified according to different scientific categories. This categorization serves as a marker of the scientific disciplines and domains with which the journals are concerned. Aggregating these categorizations over the complete dataset obtained in provides insights into how the risk communication research domain is situated in the entire body of scientific knowledge. The distribution of scientific categories associated with risk communication is shown on the global science map using the VOSviewer software . The results are shown in , where the global scientific categories are grouped in five clusters. These are #1 ‘ Biology and Medicine ’, #2 ‘ Chemistry and Physics ’, #3 ‘ Ecology and Environmental Science and Technology ’, #4 ‘ Engineering and Mathematics ’, and #5 ‘ Psychology and Social Sciences ’. provides an overview of the most frequently occurring scientific categories in risk communication research, here defined as categories in which at least 20 articles are classified. Furthermore, the average publication year and average number of citations of these categories are shown, providing insight into the temporal evolution of, and the scientific impact associated with, these categories. The table also indicates the cluster in which each scientific category is located, for easier interpretation of the figure. The results indicate that risk communication research is primarily located in the ‘ Psychology and Social Sciences ’ scientific domain (cluster #5). Within that cluster, the scientific categories ‘ Public, Environmental, and Occupational Health ’ (362 articles, 30.3% of the total dataset), ‘ Social Sciences, Mathematical Methods ’ (89, 7.4%), ‘ Social Sciences, Interdisciplinary ’ (86, 7.2%), ‘ Communication ’ (67, 5.6%), and ‘ Psychology, Multidisciplinary ’ (42, 3.5%) are the most actively contributing. The second most prevalent scientific domain is ‘ Biology and Medicine ’ (cluster #1), in which the scientific categories ‘ Medicine, General and Internal ’ (67, 5.6%), ‘ Toxicology ’ (65, 5.4%), ‘ Pharmacology and Pharmacy ’ (64, 5.3%), ‘ Oncology ’ (42, 3.5%), and ‘ Food Science and Technology ’ (39, 3.3%) are the highest contributors. The third most significantly contributing scientific domain is ‘ Ecology and Environmental Science and Technology ’ (cluster #3). Here, the scientific categories ‘ Environmental Sciences ’ (105, 8.8%), ‘ Water Resources ’ (41, 3.4%), ‘ Meteorology and Atmospheric Sciences ’ (34, 2.8%), and ‘ Geosciences, Multidisciplinary ’ (24, 2.0%) are highly contributing scientific categories. The scientific domains ‘ Engineering and Mathematics ’ (cluster #4) and ‘ Chemistry and Physics ’ (cluster #2) contribute significantly less to the risk communication research domain, with only ‘ Mathematics, Interdisciplinary Application ’ (88, 7.4%), ‘ Nuclear Science and Technology ’ (28, 2.3%), and ‘ Engineering, Civil ’ (21, 1.8%) being highly contributing scientific categories.
Apart from highlighting the main contributing scientific categories, the visualization of risk communication research on the global science map in also indicates that this research domain is highly interdisciplinary. While the research domain appears to have a very application-oriented focus, especially on health and environmental risks, its scientific foundations lie in social sciences. Furthermore, mathematical methods and their interdisciplinary application in social sciences also are an important aspect in the research domain. While there are some generic scientific categories of the social sciences represented, e.g., ‘ Social Sciences, Interdisciplinary ’ and ‘ Psychology, Multidisciplinary ’, the only significantly contributing specific communications-oriented social science categories with specific relevance to the domain’s conceptual basis are ‘ Communication ’ and ‘ Information Science and Library Science ’. This shows that most work in the risk communication domain originates from practical needs in specific risk management and governance contexts, rather than as a subdiscipline from communications research. To further support the finding that risk communication is highly interdisciplinary, the Stirling-Rao diversity index is calculated. This metric measures the aggregate distance between connected scientific categories, giving more weight to connected article pairs associated with more distant categories . For the risk communication research domain, the global diversity index is 0.803, which is a very high score. This indicates that there is a high diversity in scientific categories concerned with this domain, and that these collectively contribute to the knowledge production. Focusing on , the average year in which articles in a category are published shows that the oldest categories are ‘ Social Sciences, Mathematical Methods ’ and ‘ Mathematics, Interdisciplinary Applications ’, which are among the most active categories overall. Most application-oriented categories have an average publication year around 2010, with some variation. Categories in which the contributions appear significantly earlier (average before 2008) are ‘ Engineering, Civil ’, ‘ Nuclear Science and Technology ’, ‘ Public, Environmental, and Occupational Health ’ and ‘ Environmental Sciences ’. More recently emerging categories (average after 2012) include ‘ Meteorology and Atmospheric Sciences ’ and ‘ Geosciences, Multidisciplinary ’. In terms of research impact, it is found that several categories from cluster #5 ‘ Psychology and Social Sciences ’ are highly impactful, including ‘ Information Science and Library Science ’, ‘ Social Sciences, Mathematical Methods ’, ‘ Health Policy and Services ’, and ‘ Health Care Sciences and Services ’. In other science clusters, impactful categories are ‘ Mathematics, Interdisciplinary Applications ’ (cluster #4), ‘ Medical Informatics ’ and ‘ Medicine, General and Internal ’ (cluster #1). Remarkably, highly productive application-focused categories in other scientific clusters are much less academically impactful, with even categories which became active comparatively early, such as ‘ Environmental Sciences ’ and ‘ Water Resources ’ (cluster #3), ‘ Engineering, Civil ’ (cluster #4), and ‘ Nuclear Science and Technology ’ (cluster #2) receiving few citations on average. This shows that, in general, medicine- and health-related risk communication work is more impactful. 
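For reference, the Rao–Stirling (Stirling) diversity index used above to quantify this interdisciplinarity is conventionally written as a distance-weighted sum over pairs of categories. In the usual formulation (our rendering, not a quotation of the study), with p_i the proportion of articles assigned to category i and d_ij a dissimilarity between categories i and j:

```latex
\Delta = \sum_{i \neq j} p_i \, p_j \, d_{ij}
```

Larger values arise when activity is spread over many categories (variety and balance) that are also distant from one another (disparity), which is consistent with the high value of 0.803 reported here.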
This pattern of impact is not uniform, however: the recently emerging categories identified above, ‘ Meteorology and Atmospheric Sciences ’ and ‘ Geosciences, Multidisciplinary ’ (cluster #3), also have a comparatively high average number of citations and hence academic impact, given their relatively short time to attract citations. 3.4. Journals’ Distribution and Intellectual Base A dual-map overlay analysis is applied to identify highly productive and highly cited journals in the risk communication research domain and to trace their intellectual basis. The results are shown in and . The dual-map overlay analysis is performed using CiteSpace and the journal-based dual-map overlay created by Carley and his colleagues . It shows the journals of a specific dataset (here the risk communication dataset of ) on the global science map of journals. The analysis then traces the cited journals in the reference list of those journals, puts those on another journal overlay map, and links both maps. To facilitate the interpretation, labeled ovals are used to indicate clusters of highly active citing and cited journals. The size of the ovals is proportionate to the number of publications for the citing journals on the left and to the number of citations received from the risk communication articles by a journal on the right. Thus, on the left-hand side of the upper part of , the distribution of risk communication journals on the global science map is shown, whereas the right-hand side shows the distribution of cited journals. The bottom part of further condenses the information by concentrating lines between citing and cited journal clusters. This is done by adjusting the width of the lines proportional to the frequency of citation, making use of the so-called z-score of the citation links . The upper part of shows that risk communication articles are mainly published in ‘ Psychology, Education, Health ’ and ‘ Medicine, Medical, Clinical ’ journal groups. The cited journals, which can be considered to constitute the intellectual basis of the research domain, are primarily clustered in the ‘ Health, Nursing, Medicine ’ and ‘ Psychology, Education, Social ’ journal groups. The lower part of shows the main journal groups and their connections, where the line widths are scaled using the z-score. It is seen that journals from the ‘ Psychology, Education, Health ’ journal groups in risk communication research mainly have cited journals from the ‘ Health, Nursing, Medicine ’ and ‘ Psychology, Education, Social ’ groups. The citing journals from ‘ Medicine, Medical, Clinical ’ have predominantly cited journals from the ‘ Health, Nursing, Medicine ’ group. This is also reflected in the results of the calculated z-scores for the citation trends at the domain level, as shown in . It is also seen that nearly all citing journal groups cite journals from the ‘ Psychology, Education, Social ’ journal group, while furthermore relying on a relatively small group of journal domains, mostly health- and environment-related. This implies that, despite the high level of interdisciplinarity as found in , the intellectual basis of risk communication research remains relatively focused within specific scientific subdomains. Articles furthermore appear to often cite articles from their own journal group. shows the top 10 highly productive citing journals of the risk communication research domain, as well as the journals with the highest number of citations.
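The z-score scaling referred to above is, in its usual form, a standardisation of link frequencies. Writing f for the citation frequency of a link between a citing and a cited journal group, and μ and σ for the mean and standard deviation of these frequencies over all links (our notation, stated here only as a reading aid):

```latex
z = \frac{f - \mu}{\sigma}
```

Links with a larger positive z therefore stand out as disproportionately strong citation flows between journal groups.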
From this ranking, it is seen that Risk Analysis and Journal of Risk Research are by far the most productive journals, followed at a distance by medical- and health-related journals such as Drug Safety and Journal of Health Communication . For the cited journals, it is found that by far the most references are received by Risk Analysis , with British Medical Journal , Medical Decision Making , Journal of Risk Research , and Science also among the most frequently cited journals. 3.5. Terms Analysis: Narrative Patterns The automatic term identification method in the VOSviewer software is applied to extract terms and noun phrases related to the risk communication dataset of . In the present work, these are extracted from the title, abstract, and keywords. Only terms which appeared at least five times are retained for further analysis, with similar terms merged to increase the clarity and focus of the results, as is commonly recommended in scientometric analyses . In total, 458 terms are retained, which are clustered using VOSviewer and subsequently transformed into heat maps to identify concentrations of higher activity. shows the dominant narrative patterns of the entire dataset, indicating the existence of two large clusters. lists the terms analysis results for these two clusters, along with additional information such as the number of occurrences, the average publication year in which the terms appeared, and the average citations received. Additionally, and show a term density map of the term clusters by average year of publication of the terms, which highlights the temporal evolution of the clusters. In the left cluster in (Cluster A in ), the main terms are ‘ agency ’, ‘ government ’, ‘ stakeholder ’, ‘ organization ’, and ‘ case study ’, whereas in the right cluster (Cluster B in ), the most frequently occurring terms are ‘ patient ’, ‘ intervention ’, ‘ decision making ’, ‘ probability ’, and ‘ woman ’. On a high level, this indicates that the risk communication domain contains two major areas of work. On the one hand, there is a role for risk communication in societal risk governance, where governmental agencies interact with stakeholders from industry, the public, and academics in regard to societal risks, as in the IRGC risk governance framework mentioned in the introduction. On the other hand, there is an important role for risk communication on a more personal level in medical contexts, where medical practitioners interact with patients about treatments of specific medical conditions, as in the guidance by the Risk Communication Institute . The most frequently occurring keywords here are ‘ patient ’, ‘ intervention ’, ‘ decision making ’, ‘ probability ’, and ‘ woman ’. and show that risk issues around ‘ public health ’, ‘ food ’, ‘ floods ’, ‘ disasters ’, (disease) ‘ outbreak ’, and ‘ emergency ’ are important topics in cluster A (societal risk governance). Methodological and conceptual aspects of risk communication in societal risk governance, such as ‘ debate ’, ‘ public perception ’, ‘ dialogue ’, ‘ social medium ’, and ‘ credibility ’, are important in this narrative. From and and , it is found that earlier narratives were more strongly focused on government agencies, industry, scientists, and public participation. Topics included public health, environmental risks, and food. After 2010, the dominant narratives became stakeholders and organizations, with more attention to emergencies, crises, disasters, preparedness, outbreaks and disease control, and consumer products.
Academically impactful methodological narratives in Cluster A revolve around communicators, communication efforts and efficacy, audience, public perception, and public participation. Impactful topic-focused narratives concern disaster, crisis, emergency, and flood. In Cluster B (medical risk communication), important narratives revolve around risk issues such as ‘ treatment ’, ‘ age ’, ‘ family ’, ‘ cancer ’, ‘ diagnosis ’, ‘ medicine ’, and ‘ screening ’. Methodological and conceptual aspects of medical risk communication include ‘ probability ’, ‘ scale ’, ‘ scenario ’, ‘ skill ’, ‘ decision making ’, ‘ test ’, and ‘ patient knowledge ’. Inspecting and and shows that narratives around decision making, probability, treatment, cancer, family, woman, and consultation were dominant before 2010. After 2010, narratives focused more on patients, intervention, risk factors, age, and intentions. Academically impactful narratives in Cluster B involve skill, relative risk, scale, decision making, subject, systematic review, tests, and frequency. Overall, the results show that some narratives are rather robust in the risk communication research domain, with a continued focus on patient-, treatment-, and risk-related information in Cluster B and a continued attention to societal health risks. The results also indicate that risk communication in emergency and disaster contexts has become a topic of academic interest more recently. 3.6. Cited References—Research Fronts CiteSpace is applied in this section to perform a co-citation analysis of the risk communication dataset of in order to determine research clusters based on co-citation information. Co-occurrence of certain references in a set of articles within a research domain is a commonly used technique in scientometric research to identify clusters . Highly cited references within these clusters can be understood as the intellectual basis of the subdomains and represent key knowledge carriers for the development of the research domain. Articles citing the largest number of references from a cluster are known as ‘research fronts’. These can be seen as spearheading contributions leading the development of the research domain, and together they provide insight in the overall evolution of the research domain in terms of focus topics . In order to obtain a clear structure of the results, the co-citation analysis is here performed for the entire timespan of the dataset (1985–2019), using a time slice length of one year, an eight year look-back period of considering cited references, and a minimum of two citations per period. The resulting co-citation network has 1157 nodes and 3924 co-citation links. The largest connected component of this co-citation network is shown in to show the most important parts of the structure and the intellectual basis of the research clusters. The labels of the clusters determined by CiteSpace are extracted from the title of the citing publications, based on the log-likelihood ratio (LLR) method. In the figure, the node sizes are proportional to the number of citations of a publication, while the colors of the links between articles indicates the year when two documents were first cited together. The color shade of the clusters indicates the average publication year of the references. The main analysis results of the co-citation analysis for the largest network of connected clusters is shown in . 
This table shows the name of the research cluster, the number of references included in the cluster, the associated article representing the research front, the average year of publication of the cited references, and the silhouette value. The silhouette value of a cluster ranges from −1 to 1 and indicates the uncertainty which needs to be considered when interpreting the nature of the cluster. A value of 1 represents a perfect separation from other clusters . In , the five most highly cited references in each research cluster are shown. As explained above, these can be considered as the intellectual base of each subdomain of risk communication research. provides additional information of the top five highly cited references in the largest co-citation clusters, defined here as clusters with a minimum of 50 articles, as shown as well in . Only references with a minimum of five citations are retained. The landscape and time evolution of the clusters shows that the earliest research fronts of risk communication research focus on ‘ υ Industrial Contamination ’ and ‘ σ Public Health ’, with 1982 and 1986 being the average publication years of the cited references, respectively. This indicates that risk communication research arose from a practical need to inform the public about health and environmental risks. Thereafter, there were several research clusters which focused on better understanding risk communication as an activity in itself, which can be considered as a type of fundamental risk research . These include ‘ δ Rational Public Discourse ’ (average publication year of cited references: 1988), ‘ β Learning through Conflict ’ (1989), ‘ π Intended vs. Received Message ’ (2000), and ‘ φ Aggressive Risk Communication ’ (2012). Nevertheless, the bulk of the risk communication research clusters remained focused on specific risk issues throughout the evolution of the research domain, in line with societal concerns or contemporary focus topics in medical research. Examples of such research clusters associated with the societal risk governance cluster (Cluster A of ) include ‘ ξ Nuclear Power ’ (1986), ‘ η Epidemic and Bioterrorism ’ (1996), ‘ μ Natural Disaster Evacuation ’ (2005), ‘ ζ Flood Risk Communication ’ (2009), and ‘ θ Hurricane Risk ’ (2013). Examples of research clusters associated with medical risk communication research (Cluster B of ) include ‘ κ Supervision Register ’ (1992), ‘ λ Patient Risk Communication Effectiveness ’ (1997), ‘ ε General Practice Patient Involvement ’ (2001), and ‘ ι Pharmaceutical Risk–Benefit ’ (2012). Referring to and , the largest cluster spans 84 articles with a silhouette value of 0.769, indicating a relatively large overlap with other clusters. It is labeled ‘ α Pictographs ’ based on LLR analysis. The research front is , which focuses on the use of pictographs for communicating medical screening information to persons with higher and lower numeracy skills. This cluster is associated with Cluster B (medical risk communication) of , draws on ‘ Health, Nursing, and Medicine ’ and ‘ Psychology, Education, Social ’ journals of the global science map of , and involves scientific categories in the clusters ‘ #1 Biology and Medicine ’ and ‘ #5 Psychology and Social Sciences ’ on the global science map of . The most highly cited reference in this cluster is , which focuses on best practices on conveying health risks using numeric, verbal, and visual formats. 
Other highly cited references include , which focus on patient understanding of risks, numeracy, and the relation to decision making. The cluster also contains a review on the use of probability information in risk communication . The second largest cluster is labeled ‘ β Learning through Conflict ’. It includes 78 references with a silhouette value of 0.931, indicating that it is well separated from other clusters. Its research front is , which focuses on the role of conflict in risk communication, as a means of learning in contexts where controversy exists between stakeholders. This cluster is associated with Cluster A (societal risk governance) of , draws on mainly on journals from ‘ Psychology, Education, Social ’ journals on the global science map of , and is based on the scientific category ‘ #5 Psychology and Social Sciences ’ on the global science map of . The reference with highest number of citations is , a book on risk communication aimed at decision-makers in government and industry, highlighting both the importance of procedure and content of risk messages. Other significant references are , a manual for industrial managers outlining a number of key rules for communicating with the public; , which outlines a mental model of how lay people respond to environmental hazards; and , which studies differences in lay and expert judgments of toxicological risks. The third largest cluster is labeled ‘ γ Food Risk Communication ’, which includes 75 references with an average publication year of 2003 and a silhouette value of 0.867, indicating a reasonable separation of other research clusters. Its research front is , which describes the history of risk communication, summarizes theoretical avenues, and provides research directions in food-related risks. It highlights media amplification, public trust, and communication of uncertainty as essential ingredients. This cluster is associated with Cluster A (societal risk governance) of , draws on mainly on journals from ‘ Psychology, Education, Social ’ journals on the global science map of , and is based on the scientific categories ‘ #5 Psychology and Social Sciences ’ and ‘ #1 Biology and Medicine ’ on the global science map of . The reference with highest number of citations in this cluster is , a highly influential book in risk research, focusing on the conceptual and methodological basis of risk perception and its implications. Other impactful references include , a book outlining the social amplification of risk framework, and , a book describing theory and applications of a mental models approach to risk communication. The last two highly impactful references in this cluster are , a review article describing the evolution of some major developments in risk communication in the period 1996–2005, and , a book outlining four risk management strategies (political regulatory process, public deliberation, the technocratic/scientific perspective, and strictly economics-based risk management) and risk management case studies in Germany, the USA, and Sweden. The fourth largest cluster is ‘ δ Rational Public Discourse ’, which includes 69 references with an average publication year of 1988. Its silhouette value is 0.898, indicating a high degree of separation of other co-citation clusters. The research front of this cluster is , which discusses a communication process between stakeholders with conflicting interests from the viewpoint of message recognition, inducing attitude and behavior changes, and conflict resolution. 
This cluster is associated with Cluster A (societal risk governance) of , is based on knowledge from journals related to 'Psychology, Education, Social' on the global science map of , and is strongly rooted in the scientific category '#5 Psychology and Social Sciences' on the global science map of . The reference with the highest number of citations in this cluster is , an influential book on risk communication introducing it as both a technical and a cultural phenomenon. Another influential reference is , a book outlining seven cardinal rules for effective risk communication in environmental risk management. The final two highly influential references in this cluster are , which introduces the social amplification of risk framework, and , which presents results of a study on risk communication in response to public concerns about geological radon health hazards.

The fifth largest co-citation cluster, spanning 61 references with an average publication year of 2001, is labeled 'ε General Practice Patient Involvement'. It has a silhouette value of 0.835, indicating a relatively large overlap with other clusters. The research front of this cluster is , which presents results of a study on the use of risk communication for shared decision making in general practice. It is associated with Cluster B (medical risk communication) of and involves knowledge from journals related to 'Health, Nursing, Medicine' and 'Psychology, Education, Social' on the global science map of . It involves interdisciplinary scientific categories, bridging the scientific domains '#1 Biology and Medicine' and '#5 Psychology and Social Sciences' on the global science map of . The most impactful reference in this cluster is , which studies how numerical information can be visually represented to support dialogue and risk communication in medicine. Other highly impactful references include , which concern various aspects of the visual communication of medical risks and their impact on the effectiveness of patient decision making. The references address case studies of the representation of risk information related to violence and cancer, whereas address patient participation and teaching and learning in shared decision making.

The sixth largest research cluster spans 56 references with an average publication year of 2009 and is labeled 'ζ Flood Risk Communication'. With a silhouette value of 0.852, it has a relatively large overlap with other clusters. The research front of this cluster is , which describes a best-practices model for risk communication and management in environmental hazards related to floods. This cluster is associated with Cluster A (societal risk governance) of , relies on journals focusing on 'Psychology, Education, Social' on the global science map of , and bridges the scientific domains '#5 Psychology and Social Sciences' and '#3 Ecology and Environmental Science and Technology' on the global science map of . The most highly cited reference in this cluster is , which builds on an extensive body of risk communication literature to address four questions about risk communication, including how to communicate uncertainty, how to handle declining trust, and how lessons learned from earlier work can be used to define new principles for risk communication.
Other influential work in this cluster includes , which addresses risk perception and communication in natural hazards; , which review perceptions of flood risks and associated flood mitigation behavior; and , a book outlining an earlier version of the risk governance framework by the International Risk Governance Council .

Finally, the seventh largest co-citation cluster is labeled 'η Epidemic and Bioterrorism'. With 55 references and an average publication year of 1996, it is the last cluster with more than 50 references included. It has a silhouette value of 0.934, indicating a good separation from other co-citation clusters. Its research front is , a highly impactful article describing risk perceptions and communication strategies for the release of a biohazard pathogen in an urban setting. This cluster is associated with Cluster A (societal risk governance) of , relies on journals focusing on 'Psychology, Education, Social' and 'Health, Nursing, Medicine' on the global science map of , and bridges the scientific domains '#5 Psychology and Social Sciences' and '#1 Biology and Medicine' on the global science map of . The most highly cited references in this cluster are , which describes the evolution of 20 years of risk perception and risk communication research; , which addresses the issue of the various scales of risk as a challenge for risk communication; and , which presents an analytical–deliberative process for risk communication.
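The silhouette values quoted above for the individual clusters are a standard cluster-separation measure and can, in principle, be computed for any clustered co-citation network. The following minimal sketch is purely illustrative: it is not CiteSpace's implementation, and the co-citation matrix, the conversion to distances, and the cluster labels are hypothetical placeholders.

```python
# Minimal, illustrative computation of cluster silhouette values (range -1 to 1)
# from a co-citation network; the numbers below are placeholders, not the
# CiteSpace output reported in the text.
import numpy as np
from sklearn.metrics import silhouette_samples

# Toy symmetric co-citation counts between six cited references.
cocitation = np.array([
    [0, 8, 7, 1, 0, 1],
    [8, 0, 6, 0, 1, 0],
    [7, 6, 0, 1, 0, 1],
    [1, 0, 1, 0, 9, 8],
    [0, 1, 0, 9, 0, 7],
    [1, 0, 1, 8, 7, 0],
], dtype=float)

# Convert co-citation strength to a distance: strongly co-cited references
# are treated as close, rarely co-cited ones as far apart.
distance = 1.0 - cocitation / cocitation.max()
np.fill_diagonal(distance, 0.0)

labels = np.array([0, 0, 0, 1, 1, 1])  # two hypothetical co-citation clusters

# Per-reference silhouette s(i) = (b(i) - a(i)) / max(a(i), b(i));
# a cluster's silhouette is the mean over its member references.
s = silhouette_samples(distance, labels, metric="precomputed")
for c in np.unique(labels):
    print(f"cluster {c}: silhouette = {s[labels == c].mean():.3f}")
```

Values close to 1 then indicate references that are much closer to their own cluster than to any other cluster, which matches the interpretation of well-separated clusters used above.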
4.1. Interpretation of the Results

The analysis of research outputs in the risk communication research domain in shows a fast accelerating growth, especially over the last two decades. While it is tempting to conclude that risk communication has become a more popular research topic, this development should be understood in light of general trends in academic publishing.
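One way to make this caveat concrete is to look at the domain's share of total indexed output rather than its absolute article counts. The sketch below only illustrates the calculation; the counts are synthetic placeholders and are not taken from the dataset analyzed in this study.

```python
# Illustrative only: compare a domain's absolute annual output with its share
# of total indexed output. All counts below are synthetic placeholders.
domain_counts = {1990: 15, 2000: 20, 2010: 45, 2018: 75}             # articles per year
total_counts = {1990: 1.0e6, 2000: 1.4e6, 2010: 2.2e6, 2018: 3.0e6}  # all indexed articles

for year in sorted(domain_counts):
    share = domain_counts[year] / total_counts[year]
    print(f"{year}: {domain_counts[year]:>3} articles, "
          f"{share * 1e6:.1f} per million indexed articles")
```

If the per-million share stays nearly flat while the absolute counts rise steeply, the apparent growth largely mirrors the expansion of academic publishing as a whole.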
It has been shown that publication rates have increased sharply across the entire scientific enterprise , and similar surges in publication rates have been observed in other risk-related scientometric analyses . Hence, it is not entirely clear whether the increased risk communication research output is indeed indicative of the domain's increased academic significance, whether it results from the increased relevance of risk communication in societal governance or medical contexts, or whether the trend is driven by internal dynamics of academia.

Focusing on the geographical distribution of the research outputs shown in , it is very clear that, for the time being, the domain is strongly dominated by research originating from Western countries, with the United States of America being by far the largest contributor. Only some Western European countries have also contributed significantly to the domain, with other geographical regions displaying a much lower research output. The research impact shows a similar picture. There are, however, some signs that risk communication research is also gaining importance in non-Western countries, with research originating from, e.g., China, South Korea, and Brazil being relatively sizeable, especially more recently. This may indicate that the Western dominance will decrease in the future and that new perspectives may enter the research domain. The relative dearth of risk communication research in South American, Eastern European, African, Middle Eastern, Asian, and Oceanian countries/regions may be a reflection of the governance structures of those societies, because governance approaches and mechanisms are necessarily embedded in cultural, organizational, and political contexts . In this context, it is noteworthy that risk communication processes, especially when related to societal governance, have been strongly linked to deliberative processes and hence often presuppose certain forms of democratic societies with a large role for public engagement and participation . Nevertheless, informing the public in disaster or crisis situations, e.g., related to public health or natural hazards, is likely important irrespective of the political governance style.

The activity in the broad co-citation clusters of and the research activity across scientific categories of indicate that risk communication research follows general societal concerns: while health, medicine, and environmental issues are of continuous relevance, concerns about specific technologies such as nuclear power are more transitory. The recent focus on meteorology and atmospheric sciences and on geosciences furthermore indicates a broader trend of increased attention to the effects of climate change and natural disasters. These broad trends are also found in risk perception research , which is closely linked with risk communication, as the two are commonly discussed together, especially in a societal risk governance context .

The scientific category analysis of and the journals' distribution and intellectual basis analysis of show that the risk communication body of knowledge is interdisciplinary. Typically, research relies on social science knowledge as a base field, which is linked to more domain-specific knowledge related to medical, environmental, technical, or physical hazards. The results of the co-citation analysis and the research fronts described in show that applied research in risk communication appears to follow large societal trends and topics of societal concern, including public health, nuclear power, epidemics, natural disasters, and food regulation.

The analysis of narrative patterns in shows that there are two major clusters of terms: one related to societal risk governance and one addressing medical risk communication. These address largely different problems, in that the former focuses on communication between different societal stakeholders with possibly conflicting value systems and objectives, while the latter is concerned with interpersonal communication in a trust relationship between medical practitioner and patient (or family). The term analysis of , and and shows that these clusters are mostly disjoint. The co-citation cluster results of also show that medicine-related risk communication knowledge remains mostly within that broad subdomain of the research field, while application-specific knowledge related to other societal risks is also rather contained, as can be judged from the mostly very high silhouette values.

Finally, in the analysis of journal distribution in , it is noteworthy that Risk Analysis and Journal of Risk Research , which have been identified as core journals in risk and safety research , concentrate a large body of work on risk communication while also having a significant scientific impact in this domain. This can be seen to support the argument that risk research is not merely an interdisciplinary or transdisciplinary research activity, but that it has its own foundational basis of concepts, theories, models, and approaches, and that it could hence be seen as a research domain in its own right . As noted by one of the reviewers of an earlier version of this manuscript, the answers found to the questions stated in the introduction are in line with what knowledgeable domain experts would expect. Through a data-driven approach, the results obtained in this work substantiate these intuitive expert insights, providing evidence for the stated research questions while raising new questions and research directions, as outlined next.

4.2. Future Work

Based on the results of as discussed in , a number of avenues for future work can be identified, either to further develop the research domain of risk communication itself or to better understand how it has developed as a domain of scientific activity. Considering that risks are mediated and socially conveyed differently across varying cultural traditions , the lack of risk communication research in non-Western societies may result in a culturally biased approach. Hence, available theories, models, or conceptual frameworks for risk communication may need modification or elaboration to account for different social traditions, world views, or knowledge systems. More future research in non-Western countries may therefore lead to new fundamental insights and applications in the research domain.

Based on the finding that applied risk communication research focuses on issues of contemporary societal importance, there are various new directions for future risk communication research. Major global risks are one important line of work.
Judging from The Global Risks Report 2020 and considering that perceptions of the importance of societal risks are largely driven by the severity of potential impacts , such research could focus on climate action failure, weapons of mass destruction, biodiversity loss, extreme weather, water crises, information infrastructure breakdown, natural disasters, cyberattacks, human-made environmental disasters, and infectious diseases.

Other topics for risk communication may concern new technological developments or consumer products. Risk communication research may be especially relevant where such technologies lead to concerns about human health or safety, in particular in contexts where uncertainty and ambiguity are societally important dimensions of risk. One example is the human health concerns related to the adoption of 5G wireless communication technology , about which conspiracy theories circulate on social media platforms, linking 5G technology to the COVID-19 pandemic . Another example may be the industrial developments towards autonomous cars and maritime autonomous surface ships. Risk perceptions may be important factors leading certain consumer or societal stakeholder groups to oppose the adoption of such new technologies . Risk communication research can help to inform and interact with the public about such new developments.

Based on the finding that risk communication research related to societal risk governance issues and research in medical contexts are as yet largely separate areas of work, it may be fruitful for future research to identify links between certain themes within these subdomains. Such knowledge exchange can, for instance, lead to new conceptual, theoretical, or methodological approaches.

While scientometric analyses are well suited to obtaining high-level insights into a research domain in terms of its structure, patterns, and evolution, the existing scientometric techniques are ill-suited to detecting research gaps, recent patterns, or research directions related to specific frameworks or approaches. Other review types, such as critical reviews, meta-analyses, or systematic reviews, are much more amenable to these goals (see ). Several such narrative reviews have already been published. Hence, the current work should be seen as complementary to those reviews, as it pursues different aims.

Finally, further work can be directed at better understanding the development of the risk communication research domain itself. Example research questions in this line of work include how risk communication has impacted the disciplines it is associated with; what relationships exist between risk communication and anthropology, and how knowledge of the latter may be used to advance the former, especially when dealing with non-Western societies; to what extent and how risk communication research has influenced political science research and actual political decision-making processes; and to what extent risk communication research has helped to expand risk research or establish it as an independent domain of scientific activity. Narrative reviews and other research methods are better suited than the scientometric analyses presented here for such in-depth questions about the risk communication research domain. Nevertheless, the results of may serve as a basis for directing such future research.

4.3. Limitations

As in any study, it is important to be aware of the limitations of the presented work. First, the analysis is conditional on the search strategy described in . While a title-based search strategy is widely applied in broad research domains to ensure the relevance of the identified documents, using other search terms; searching in title, abstract, and keywords; or using a database other than WOSCC may affect the articles that are found and hence the results. Furthermore, the restriction to English-language publications may also induce some blind spots in the analysis.

It is noted here that using other terms such as 'crisis communication', 'emergency communication', or 'disaster communication' will almost certainly lead to detecting other patterns and trends. While these themes clearly have a close relationship to risk communication, as evidenced, e.g., by the results of the terms analysis and the co-citation clusters , the choice has been made in this article to restrict the search to 'risk communication'. This is done for two reasons. First, including other terms blurs the scope of the domain of research which is intended to be covered, as it raises the question of why, for instance, 'hazard communication' or 'safety communication' are not then included as well. The authors believe that a clearly delineated focus on risk communication is preferable from a conceptual and methodological point of view. The second reason relates to the meaning of the risk concept as it contrasts with other related terms such as those mentioned above. While there is no scientific agreement about what exactly 'risk' means, it is broadly agreed that the concept carries the notion of possible or uncertain future events . Thus, it may be deduced that risk communication focuses primarily on informing and interacting about events which have not yet happened. In contrast, crisis, emergency, or disaster communication is more focused on providing information about events which have already occurred or are ongoing . While this delineation is not exact due to linguistic ambiguities, risk communication, for example, relates more to stakeholder processes in preparedness planning for natural or technological disasters, whereas crisis communication would focus on what information to provide, when, and how, to people affected by an ongoing disaster. Follow-up research may explore the domains of research covered by the other mentioned search terms, from which more conclusive statements about their specific relationships can be made.

In the analyses involving the temporal evolution of geographical research productivity , active scientific categories , narrative patterns , and co-citation clusters , the average publication year is used as a metric. While averages may hide significant information about the shape of the distribution, e.g., its variance or skewness, averages are commonly used in scientometric research to obtain high-level insights into the development of a research domain . A common criticism of scientometric analyses is the use of citation metrics, such as the total number of citations, for determining impact and detecting patterns. As citations need time to accumulate, the reliance on the number of citations may cause some important trends to be missed, especially in more recent research. This can be seen, e.g., in , where some scientific categories with more recent average years of publication do not have very high average citation scores. This may reflect an actually lower impact in the academic community, but the measures are also confounded by the shorter time this research has had to accumulate citations.
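To illustrate these measurement caveats with made-up numbers (the values below are synthetic, not drawn from the study's dataset, and 2020 is simply assumed as the census year), the following sketch contrasts the mean with the median and spread of publication years, and shows how normalizing citation counts by years since publication partly compensates for the shorter citation window of recent work.

```python
# Synthetic illustration of two limitations discussed above: averages hide the
# shape of a distribution, and raw citation counts penalize recent work.
import statistics as st

# Hypothetical publication years of articles assigned to one scientific category.
years = [1995, 1996, 1997, 2015, 2016, 2016, 2017, 2017, 2018, 2019]
# Hypothetical citation counts; older articles have had more time to be cited.
citations = [60, 55, 48, 9, 7, 6, 5, 4, 2, 1]

print("mean publication year  :", round(st.mean(years), 1))
print("median publication year:", st.median(years))
print("stdev of years         :", round(st.stdev(years), 1))

print("mean citations per article:", round(st.mean(citations), 1))
# Citations per year since publication (assumed census year: 2020) partly
# corrects for the shorter citation window of recent articles.
per_year = [c / (2020 - y) for c, y in zip(citations, years)]
print("mean citations per year   :", round(st.mean(per_year), 2))
```

In this toy example, the mean publication year sits several years away from the median, and the per-year citation rate paints a different picture from the raw counts, which is exactly the kind of information that single averages can obscure.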
Furthermore, using citations as a proxy for the significance of research contributions is controversial , e.g., because of the potential for manipulation and the lack of consideration of why an article was cited. Consequently, the presented scientometric analysis should not be seen as an endorsement of the correctness, value, or effectiveness of the highlighted risk communication research works. Instead, the analyses should rather be understood as descriptive of the development of the field and of the countries, scientific categories, journals, terms, and references which have jointly forged and shaped the research domain to what it currently is.
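As a final, hedged illustration of the search-strategy limitation discussed at the start of this subsection, the sensitivity of the corpus to the choice of search field can be probed on any bibliographic export. The file name 'records.csv' and the column names below are assumptions about a generic export format, not the authors' actual retrieval workflow.

```python
# Hedged sketch: compare how many records match the search phrase in the title
# only versus in title, abstract, or keywords. 'records.csv' and its columns
# are hypothetical placeholders for a generic bibliographic export.
import csv

PHRASE = "risk communication"

def has_phrase(text):
    return PHRASE in (text or "").lower()

title_only = 0
any_field = 0
with open("records.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        in_title = has_phrase(row.get("Title"))
        in_other = has_phrase(row.get("Abstract")) or has_phrase(row.get("Keywords"))
        if in_title:
            title_only += 1
        if in_title or in_other:
            any_field += 1

print(f"'{PHRASE}' in title:                     {title_only}")
print(f"'{PHRASE}' in title, abstract, keywords: {any_field}")
```

The gap between the two counts gives a rough sense of how much broader, and potentially noisier, the corpus would become if the phrase were also matched outside the title.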
In the analyses involving temporal evolutions of geographical research productivity , active scientific categories , narrative patterns , and co-citation clusters , the average publication year is used as a metric. While averages may hide significant information about the shape of the distribution, e.g., its variance or skewness, averages are commonly used in scientometric research to obtain high-level insights into the development of a research domain . A common criticism of scientometric analyses is the use of citation metrics such as total number of citations for determining impact and detecting patterns. As citations need time to accumulate, the reliance on number of citations may cause some important trends to be missed, especially in more recent research. This can be seen, e.g., in , where some scientific categories with more recent average years of publication do not have very high average citation scores. This may be because of the actual lower impact in the academic community, but the measures are also confounded by the shorter time of citations to accumulate to this research. Furthermore, using citations as a proxy for significance of research contributions is controversial , e.g., because of the potential for manipulation and lack of consideration of why an article was cited. Consequently, the presented scientometric analysis should not be seen as an endorsement of the correctness, value, or effectiveness of the highlighted risk communication research works. Instead, the analyses should rather be understood as descriptive of the development of the field and the countries, scientific categories, journals, terms, and references which have jointly forged and shaped the research domain to what it currently is. In this work, a scientometric analysis of the risk communication research literature has been presented, spanning the period from 1985 to 2019. Various scientometric methods and visualization tools are applied to determine temporal trends and geographical patterns, contributing scientific categories and domains, the distribution of contributing journals and the intellectual basis of the domain, narrative patterns, and the evolution of the research domain using co-citation clusters. The analyses provide unprecedented insights into the structure, patterns, and developments of the risk communication domain, leading to several avenues of future research. The following main key conclusions are drawn: (i) Risk communication research has grown exponentially, especially with a very significant increase since the early 2000s. (ii) The domain is dominated by Western science, primarily from the USA and Western European countries, with non-Western research however recently emerging. (iii) The research domain is highly interdisciplinary, where typically knowledge from ‘Psychology and Social Sciences’ is combined with application-specific knowledge, with the domains ‘ Biology and Medicine ’, and ‘ Ecology and Environmental Science and Technology ’ being the most prominent. (iv) The most important journals in the field are Risk Analysis and Journal of Risk Research , which may suggest that risk research can been seen as a scientific domain in its own right. (v) There are two main, largely disconnected, narrative clusters in risk communication research. The first cluster is that of communication between medical practitioners and patients, and the other cluster concerns stakeholder communication for societal risk governance. 
The dominant narratives in the former concern interventions, decision making, and various medical conditions, whereas the latter focuses on societal risks such as public health, food, floods, or disease outbreaks. (vi) Risk communication research originates from a practice-oriented need to communicate regarding industrial (environmental) contamination and public health. Most subsequent research clusters address particular medical issues or societal concerns, including nuclear power, epidemics, or natural disasters. Apart from such application-oriented work, there are also some clusters that address risk communication models or theories or that study risk communication effectiveness. Many clusters are quite disjoint. This indicates that knowledge exchange between application domains is not very significant, which may therefore be a fruitful direction for future scholarship. Apart from providing insights into the structure and evolution of the research domain and leading to the formulation of several avenues for future research, the results are considered particularly useful for early career researchers who are relatively new to the very extensive domain of risk communication research. They can also assist experienced academics in navigating the fast-increasing volume of research, either for teaching or research purposes. The results can also be useful for journal editors, to position their journal in the wider body of scientific work or to identify hot topics, for instance, for deciding on opening a special journal issue on a certain topic. |
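As a purely illustrative aside to the metric choices discussed above (average publication year as a cluster-level indicator and accumulated citations as an impact proxy), the following minimal Python sketch shows how such indicators are commonly aggregated from a bibliographic record table. The column names and records are hypothetical placeholders and do not reflect the authors' actual data or toolchain.

```python
# Illustrative aggregation of the cluster-level indicators discussed above:
# average publication year and accumulated citations per co-citation cluster.
# The records and column names are hypothetical, not the study's data.
import pandas as pd

records = pd.DataFrame({
    "cluster":   ["medical risk communication", "medical risk communication",
                  "nuclear power", "nuclear power"],
    "year":      [2012, 2018, 1994, 2001],
    "citations": [35, 4, 120, 60],
})

summary = records.groupby("cluster").agg(
    mean_year=("year", "mean"),           # average publication year of the cluster
    total_citations=("citations", "sum"), # citations accumulated so far
)
print(summary)
# Caveats noted above: the mean hides the spread of publication years, and
# recent clusters have had less time to accumulate citations.
```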
Questionnaire survey and analysis of drug clinical research implementation capabilities of breast cancer treatment departments in Chinese hospitals | 052a26d6-1036-4687-8ec5-81d08d263653 | 11283010 | Internal Medicine[mh] | Introduction Breast cancer is currently the leading cause of malignant tumors worldwide . In China, the incidence and mortality rates of breast cancer rank first and fourth, respectively, among female malignant tumors . In recent years, the five-year survival rate of patients with breast cancer has increased substantially , mainly due to the promotion of early cancer screening and diagnosis, as well as the application of many new antitumor drugs, such as cyclin-dependent kinase 4/6 inhibitors (CDK4/6is) and antibody‒drug conjugates (ADCs), which have benefited from the rapid development of clinical research . From 2009 to 2018, 1493 cancer drug trials were initiated in China, and the annual number of initiated clinical trials increased over time, with an average annual growth rate of 33 % . These growth trends illustrate the increasing capability of cancer drug research and development achieved in mainland China over the decade since 2009. Moreover, the annual participation rate in clinical trials for new antitumor drugs in China has also increased substantially, with a 15.7 % average annual increase from 2011 to 2021 . China's self-developed drugs have been on the market continuously in recent years . In the field of breast cancer, many new drugs have been tested in clinical trials by international pharmaceutical companies , and several new drugs, including pyrotinib and dalpiciclib , have been independently developed by China. Statistically, breast cancer ranks fourth in the number of registered trials of all cancer types, after solid tumors, non-small cell lung cancer (NSCLC) and lymphoma, accounting for 9 % of the total . Taken together, these findings indicate that breast cancer drug research and development play important roles in the field of new drug creation and production. However, there is still a gap in China's ability to participate in new drug research compared to that of European and American countries. Among the investigational new drug (IND) trials, the country with the highest participation rate among all the trials was the United States, which participated in 65.7 % of all the registered new drug trials . However, China ranks fifth, with participation in only 8.8 %. New drugs were marketed later in China than in foreign countries . In addition, from 2009 to 2018, 123 clinical trial units were involved as the leading site for cancer drug trials in mainland China, and the largest numbers of leading units were located in east China, followed by north China, whereas the smallest numbers were in northwest China and southwest China . Uneven regional distributions still persist. Moreover, the lack of simplicity in the processes related to preparation for clinical research implementation remains unresolved . Thus, it is vital to fully understand the current status of the capability of clinical research on breast cancer drugs in China to further improve its development. Nevertheless, most of the related studies and statistics have focused primarily on the quantity and characteristics of clinical research projects in China. Little is known about the capacity of clinical research implementation in breast cancer treatment departments, and thus, it is urgently important to launch appropriate studies to understand the current situation. 
Our study aimed to provide an overview of the status of clinical research implementation in breast cancer treatment departments in China, analyze the differences in the clinical research capacity of breast cancer departments in different regions and hospital classifications and identify the problems and challenges faced by current development, hoping to provide future directions for improving the clinical research capacity of breast cancer in China. Material and methods 2.1 Data collection This was a department-based cross-sectional study conducted in the form of electronic questionnaires in China from 7th August to 31 st August 2023. This study was initiated by the breast cancer expert committee of the National Cancer Center (NCC). The questionnaire was designed on the Wenjuanxing platform. Our study was conducted among the first batch of breast cancer standardized diagnosis and treatment quality control pilot centers, which includes 200 hospitals selected by the NCC on the basis of the level of breast cancer diagnosis and treatment, surgery volume and other factors. Pilot hospitals included need to have at least 2 years of working experience in breast cancer diagnosis and treatment, with a number of 10 or more beds for breast cancer treatment, and a volume of more than 200 breast cancer surgeries per year. Besides, complete set-up of pathology, radiographic, radiotherapy and other ancillary departments are necessary. The centers cover 30 provinces (autonomous regions and municipalities), and all of them are tertiary-level general hospitals or cancer hospitals. These hospitals are good representatives of the provincial-level and prefectural-municipal-level hospitals engaged in the treatment of breast cancer in China. The departments included were required to be clinical departments conducting drug clinical research. 2.2 Group variance analysis The provinces are divided into east, central, west and northeast regions according to the four main geographic regions defined by the National Bureau of Statistics. General hospitals and specialized cancer hospitals were divided according to the clinical range. The departments were classified as departments of medical oncology and surgical oncology, as well as other departments, including departments of radiotherapy and breast cancer treatment centers. Comparative analysis of different departments was only conducted between departments of medical oncology and surgical oncology. 2.3 Questionnaire design and completion The authors designed the questionnaire based on the specific steps, current realities and potential difficulties in clinical research conduction in China. The questionnaire was completed by the head or secretary of the department. Duplicate questionnaires were not allowed for the same account. The questionnaire covered six aspects, namely, clinical research team-building, patient service and management, ethics operation, clinical research process implementation efficiency, clinical research participation and implementation experience, and the needs and problems encountered during clinical research implementation. The questions were mainly multiple choices and included one open-ended question. 2.4 Statistical analysis The capability and demands of clinical research implementation of breast cancer treatment departments were based on the descriptions of the questionnaire results. 
If more than one questionnaire was received from the same department at the same hospital, the optimal option was chosen according to the actual clinical performance estimated by experts. As ethics operations are usually hospital-based, ethics operations and clinical research process implementation efficiency were described and analyzed from 122 hospitals. The remaining questions, which mainly depended on departmental conditions, were analyzed from 127 departments. Subgroup difference analysis was performed using the chi-square test and Fisher's exact test. All p values were derived from two-sided tests, and the results were considered statistically significant at a p < 0.05. The open-ended question was described in the researcher's summary. The data were analyzed with SPSS for Windows, version 26.0 (SPSS Inc).
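To make the subgroup comparison procedure above concrete, the sketch below shows how such tests could be run in Python with scipy.stats. This is only an illustration: the contingency counts are hypothetical placeholders rather than study data, and the actual analysis was carried out in SPSS 26.0 as stated above.

```python
# Minimal sketch of the subgroup difference analysis described above.
# The counts are HYPOTHETICAL placeholders, not study data; the original
# analysis was performed in SPSS for Windows, version 26.0.
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x3 table: hospital type (rows) vs. annual enrollment
# category (<=50 / 51-200 / >200 patients, columns).
table_2x3 = [[9, 12, 2],
             [80, 17, 7]]
chi2, p, dof, expected = chi2_contingency(table_2x3)
print(f"chi-square = {chi2:.2f}, dof = {dof}, two-sided p = {p:.4f}")

# Hypothetical 2x2 table for a dichotomous item (e.g., follow-up platform
# available yes/no); Fisher's exact test is typically preferred when expected
# counts are small, and scipy supports it for 2x2 tables.
table_2x2 = [[18, 3],
             [67, 34]]
odds_ratio, p_exact = fisher_exact(table_2x2, alternative="two-sided")
print(f"Fisher's exact two-sided p = {p_exact:.4f}")

# As in the study, results would be considered significant at p < 0.05.
```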
Results 3.1 Survey characteristics Questionnaires were distributed among 200 hospitals, 122 of which ultimately participated, for a response rate of 61 %. A total of 153 questionnaires were collected as of 31 August 2023, 22 of which were duplicated in the same department at the same hospital and 4 of which were completed by nonclinical department (Department of Pathology, Quality Control Department, Functional Department and Department of Interventional and Ultrasound Medicine). A total of 127 questionnaires were ultimately included in the analysis, and all the questionnaires met the inclusion criteria. Among all participating hospitals, 21 (17.2 %) were specialized cancer hospitals, and 101 (82.8 %) were general hospitals. Departments of surgical oncology accounted for 79.5 % (101/127), departments of medical oncology accounted for 15.0 % (19/127), and the remaining 5.5 % (7/127) were other departments (2 radiotherapy departments and 5 breast cancer treatment centers). Participating hospitals were distributed in 27 provinces, autonomous regions, and municipalities directly under the central government, with the largest number of hospitals and departments located in east China [48 (39.49 %)]. The geographical distribution of all the hospitals is shown in . 3.2 Drug clinical research implementation capacities of breast cancer treatment departments Our questionnaire was based on an objective questionnaire survey of the level of competence involved in implementing clinical drug research and solicited the needs of each department through both objective and open-ended questions. The specific detailed questionnaire used is shown in . 3.2.1 Clinical research team-building capabilities As shown in , in terms of professional staffing for clinical research, 95 (74.8 %) departments had more than 5 medical personnel with IND trials participation experience. Medical personnel involved in the clinical research of 118 (92.9 %) departments had received good clinical practice (GCP) training. A total of 98 (77.2 %) departments had more than 5 medical personnel with GCP certificates. Ninety-two (72.4 %) departments had clinical research nurse specialists. 120 (94.5 %) departments carried out more than one internal professional communication and training activity per month. 3.2.2 Patient service and management capabilities All departments carried out patient education activities each month, mostly 1–2 times per month, accounting for 73.2 % (93/127).
A total of 118 (92.9 %) departments carried out regular out-of-hospital patient follow-up visits. In terms of the electronic platform application, 79 (62.2 %) departments had established specialized disease databases, and 85 (66.9 %) departments were able to use patient follow-up electronic platforms. The details are shown in . 3.2.3 Ethics operation capability In terms of the development of the ethics review process from a hospital perspective, 89.3 % (109/122) had undergone centralized institutional review board (IRB) review, 77.9 % (95/122) carried out ethics previews, and approximately half of the hospitals participated in regional ethics mutual recognition, accounting for 58.2 % (71/122). Regarding the efficiency of ethics review, the dates of ethics meetings of 47.5 % (58/122) of the hospitals were irregular, and the remaining hospitals were able to hold monthly ethics meetings, mostly once per month, accounting for 40.2 % (49/122). A total of 79.5 % (97/122) of the hospitals held expedited ethics meetings according to demand. In addition, 76.2 % (93/122) of the hospitals had designated staff in charge of ethics document reception and the ethics process interface. The details are shown in . 3.2.4 Situation of clinical research process implementation efficiency More than half of the hospitals [57.4 % (70/122)] took an average of 2–4 weeks from the submission of project review documents to project approval. A total of 57.4 % (70/122) of the hospitals took an average of 2–4 weeks for the project contract signature, and less than one-third of the hospitals [22.1 % (27/122)] took 1 week or less for this process. After the ethics meeting, 43.4 % (53/122) of the hospitals took 2–4 weeks to obtain ethics approval. The average time taken to sign documents about the management of human genetics resources was approximately 2–4 weeks in 56.6 % (69/122) of the hospitals. At the research completion stage, more than half of the hospitals [62.3 % (76/122)] spent 2–4 weeks from the time they submitted a completion request until they finally completed the process. The details are shown in . 3.2.5 Clinical research participation and implementation experience The majority of departments had initiated 2 or fewer IND trials (72.4 %) or investigator-initiated trials (IITs) (80.3 %) in the past year. The number of departments participating in clinical trials was slightly greater than that involved in initiation. In terms of patient enrollment, the majority of departments (70.1 %) had enrolled 50 or fewer patients in breast cancer-related IND trials in the past year, and only 7.1 % had enrolled 200 or more patients. Overall, 72.4 % of the hospitals had a patient dropout rate of less than 5 % in the past year. In terms of clinical trial audits, approximately half of the departments (47.2 %) had received 1–2 audits in the past year. The details are shown in . 3.3 Differences in clinical research implementation among regions, departments and hospital classifications To further analyze the differences in clinical research implementation capabilities, we conducted comparisons among different regions, departments, and hospital types. The central region had the largest number of departments with patient follow-up platform applications, and the western region had the smallest number (northeast vs. east vs. west vs. central: 84.6 % vs. 64.0 % vs. 51.4 % vs. 85.2 %, p = 0.017). 
The frequency of ethics meetings in the northeast region was relatively regular, with 75 % of the hospitals holding ethics meetings more than once per month. However, more than half of the hospitals in the west and central regions had irregular frequencies of ethics meetings. A total of 38.5 % of departments in the northeast region enrolled more than 200 patients in breast cancer IND trials, which was the highest among all regions . For the comparison between departments, 120 questionnaires from departments of medical and surgical oncology were included. All medical oncology departments had more than 5 personnel with GCP certificates (0, 1–5 people, more than 5 people had GCP certificates in medical oncology vs. surgical oncology: 0 %, 0 %, 100 % vs. 3.0 %, 24.7 %, 72.3 %, p = 0.018). In addition, departments of medical oncology initiated and participated in a greater number of clinical studies than surgical oncology departments . Among the different hospital classifications, ethics meetings were held more frequently and steadily in specialized cancer hospitals than in general hospitals (not fixed, once/month, more than 2 times/month at specialized cancer hospitals vs. general hospitals: 23.8 %, 42.9 %, 33.3 % vs. 52.5 %, 39.6 %, 7.9 %). More specialized cancer hospitals initiated and participated in IITs and IND trials than general hospitals. There were far more patients enrolled in RCTs in specialized cancer hospitals than in general hospitals (within 50, 50–200, or more than 200 patients in specialized cancer hospitals vs. general hospitals: 39.1 %, 52.2 %, and 8.7 % vs. 76.9 %, 16.4 %, and 6.7 %, respectively; p = 0.001) . 3.4 Difficulties and demands of the implementation of clinical trials We also investigated the needs and difficulties encountered by departments during the implementation of clinical research. Most of the departments were strongly willing to undertake clinical research (98.4 %), and there was a demand for clinical research on different types of drugs, including small molecular targeted drugs (97.6 %), biological products (91.3 %) and cytotoxic drugs (72.4 %). Among the problems encountered in the initiation of clinical research, the most difficult were limitations in the quality and quantity of the projects, a lack of funding, and inefficiency in research execution and implementation, which accounted for 60.6 %, 55.9 %, and 47.2 %, respectively. In the process of conducting the study, the most common problem was patient recruitment, accounting for 79.5 %. According to the difference analysis, general hospitals encountered more limitations in terms of the quality and quantity of clinical research projects. The main needs were focused on trial recommendation and leading, which accounted for 83.5 %, followed by professional guidance for team personnel, which accounted for 78.7 %. The details are shown in and . To gain a broader understanding of the actual needs, we designed an open-ended question to reflect the subjective ideas and suggestions of each department in the development of clinical research. Since there was no fixed answer format, the main needs of each department were summarized as follows: First, the NCC was expected to establish a national clinical research recruitment and data sharing platform to improve clinical research recruitment and regional participation among breast cancer departments. Second, the frequency of professional training for regional breast cancer departments urgently needs to increase.
Third, funding support and targeted assistance need to be offered and supported by substantive policies.
A total of 79.5 % (97/122) of the hospitals held expedited ethics meetings according to demand. In addition, 76.2 % (93/122) of the hospitals had designated staff in charge of ethics document reception and the ethics process interface. The details are shown in . 3.2.4 Situation of clinical research process implementation efficiency More than half of the hospitals [57.4 % (70/122)] took an average of 2–4 weeks from the submission of project review documents to project approval. A total of 57.4 % (70/122) of the hospitals took an average of 2–4 weeks for the project contract signature, and less than one-third of the hospitals [22.1 % (27/122)] took 1 week or less for this process. After the ethics meeting, 43.4 % (53/122) of the hospitals took 2–4 weeks to obtain ethics approval. The average time taken to sign documents about the management of human genetics resources was approximately 2–4 weeks in 56.6 % (69/122) of the hospitals. At the research completion stage, more than half of the hospitals [62.3 % (76/122)] spent 2–4 weeks from the time they submitted a completion request until they finally completed the process. The details are shown in . 3.2.5 Clinical research participation and implementation experience The majority of departments had initiated 2 or fewer IND trials (72.4 %) or investigator-initiated trials (IITs) (80.3 %) in the past year. The number of departments participating in clinical trials was slightly greater than that involved in initiation. In terms of patient enrollment, the majority of departments (70.1 %) had enrolled 50 or fewer patients in breast cancer-related IND trials in the past year, and only 7.1 % had enrolled 200 or more patients. Overall, 72.4 % of the hospitals had a patient dropout rate of less than 5 % in the past year. In terms of clinical trial audits, approximately half of the departments (47.2 %) had received 1–2 audits in the past year. The details are shown in . Clinical research team-building capabilities As shown in , in terms of professional staffing for clinical research, 95 (74.8 %) departments had more than 5 medical personnel with IND trials participation experience. Medical personnel involved in the clinical research of 118 (92.9 %) departments had received good clinical practice (GCP) training. A total of 98 (77.2 %) departments had more than 5 medical personnel with GCP certificates. Ninety-two (72.4 %) departments had clinical research nurse specialists. 120 (94.5 %) departments carried out more than one internal professional communication and training activity per month. Patient service and management capabilities All departments carried out patient education activities each month, mostly 1–2 times per month, accounting for 73.2 % (93/127). A total of 118 (92.9 %) departments carried out regular out-of-hospital patient follow-up visits. In terms of the electronic platform application, 79 (62.2 %) departments had established specialized disease databases, and 85 (66.9 %) departments were able to use patient follow-up electronic platforms. The details are shown in . Ethics operation capability In terms of the development of the ethics review process from a hospital perspective, 89.3 % (109/122) had undergone centralized institutional review board (IRB) review, 77.9 % (95/122) carried out ethics previews, and approximately half of the hospitals participated in regional ethics mutual recognition, accounting for 58.2 % (71/122). 
Discussion With the rapid development of clinical research and the continuous enhancement of innovation capabilities in China, the quality and implementation capability of clinical research have drawn widespread attention. Our study is one of the earliest known surveys focusing on the drug clinical research implementation capacities of breast cancer treatment departments in China. The involved hospitals basically covered the major provinces in mainland China.
The questionnaire covered not only the current status of different departments in terms of team building, ethics procedures, and the conditions of previous research implementation experience but also the demands and main difficulties encountered at present and the differences among different geographical regions, departments and hospital classifications. We aimed to clarify the current status of clinical drug research in breast cancer treatment departments and provide targeted directions for future development. Our study reported that Chinese breast cancer treatment departments generally possess an appropriate level of competence in the implementation of drug clinical research, with a basically complete implementation process and, to a certain extent, joint participation from regional oncology treatment institutions. The medical oncology department presented greater research participation and initiation than the surgical oncology department. As our research was mainly focused on drug clinical research in breast cancer, it is not unexpected that departments of medical oncology have more experience than departments of surgical oncology in drug clinical research conduction due to the difference in their departmental functions. Moreover, specialized cancer hospitals are more experienced than general hospitals in conducting clinical research, which may be related to the characteristics of treatment at specialty hospitals. It is obvious that specialized cancer hospitals are more targeted in terms of patient population, which is more conducive to the inclusion of patients in clinical research. Besides, specialized cancer hospitals are equipped with more professional teams and auxiliary departments, which certainly means stronger multidisciplinary support. Professional training and communication as well as research recommendations and implementations are the most urgent needs for clinical research development. Clinical research has undergone rapid development in China in recent years. The annual number of initiated clinical trials, new drugs, and newly added leading clinical trial units sharply increased after 2016 . The number of new antitumor drugs approved is also increasing, and almost half of the cancer drug indications approved in China have improved overall survival . In addition, the annual number of drugs with clinically meaningful benefits from therapies approved in China according to the European Society for Medical Oncology-Magnitude of Clinical Benefit Scale (ESMO-MCBS) increased from 0 in 2005 to 6 in 2018 . All of China's early-phase clinical trials have also shown yearly growth; a total of 996 drugs were tested in phase 1 trials, and 1359 phase 1 trials of anticancer drugs were initiated in China from 2017 to 2021 . Analysis of the results of Food and Drug Administration (FDA) research in China and the United States from 2009 to July 2023 showed that the quality of clinical trials implemented in China has improved considerably annually, and clinical trials with no action indicated (NAI) have increased from 48 % to 85 %, suggesting that the results of such verification have been superior to those in the United States since 2016 . According to the clinical research implementation capability reflected in our findings, most department personnel have had experience in breast cancer clinical trial participation and have undergone standardized GCP training, indicating that the clinical research team was basically complete. 
Most departments are able to establish patient follow-up electronic platforms and carry out patient education and follow-up activities regularly. According to the needs section of our survey, most departments showed enthusiasm for actively participating in clinical research and a willingness to receive professional and standardized training. All of the above results showed that the quality and capability of clinical research implementation in China meet clinical requirements and have gained a certain degree of international recognition and industry competitiveness, which is inextricably linked to good conditions for conducting clinical research and a standardized clinical research system. Although the number of new drug approvals is increasing annually, Chinese research and development (R&D) of innovative antitumor drugs is still facing great challenges. Between 2005 and 2021, 66 % of drugs were developed by foreign companies . In addition to the basic research and clinical trial design capabilities that need to be further improved, the ability of clinical research to be implemented as the foundation and key part of the process of developing a new drug for clinical treatment is highly important. With the increase in interregional exchanges, disparities in clinical research due to geographical differences and uneven distributions of medical resources are gradually easing. However, in some aspects of clinical research, the west region still lags slightly behind other regions. This may be related to differences in the distribution of healthcare resources due to overall national planning and persistent regional differences in economic levels. Improving the balance between equitable access to new drugs and the efficiency of pharmaceutical research and development is an important topic worthy of exploration by policy-makers. Moreover, overall and synchronous participation in registered cancer drug trials in China is much lower than that in the United States, European countries and Japan, especially for exploratory trials . This phenomenon is consistent with our research findings. In addition, our findings showed that the inefficiency of drug clinical research implementation is an equally serious problem, with most hospitals taking 2–4 weeks or more to complete part of the approval process and suffering an irregular frequency of ethics meetings. This condition is attributable to inconsistent review requirements, processes and forms, as well as cumbersome procedures. Moreover, the personnel of ethics committees are mostly part-time personnel, resulting in a mismatch between the ethical workload and staffing, which certainly increases the difficulty of ethics review implementation, indicating that the ethics approval process still needs to be further improved and simplified on the basis of ensuring strict review norms. Moreover, 58.2 % of the hospitals participated in regional ethics mutual recognition in our research, and the capacity for homogenization, such as regional ethics mutual recognition, still needs to be improved to circumvent the waste of resources and time involved in ethics review and further improve efficiency. All of the above conditions suggest that we still face many challenges in improving clinical research capacity. Our research reflects that more than half of the departments encountered limitations in the quality and quantity of their studies and a lack of project funding. Constant challenges have been encountered with respect to the development of quality control and management standards.
In 2017, China formally proposed the implementation of register management for the qualification accreditation of clinical trial organizations. In recent years, the National Medical Products Administration (NMPA) has frequently issued various regulations, policies, documents and guiding principles and established the “ Measures for the supervision and inspection of drug clinical trials institution (for trial implementation) ” in Nov 2023 . Through the implementation of the clinical trial organization responsibility and principal investigator (PI) responsibility system, China has strengthened the sense of responsibility of all parties involved in clinical research, comprehensively promoted the training of full-time clinical researchers and the construction of professional teams, and further safeguarded the quality of clinical trials conducted. Moreover, this approach makes full use of China's national-level medical centers and regional-level medical centers and overall national planning settings to reasonably allocate high-quality medical research resources, promote the sharing of regional medical resources and conduct clinical research with regional characteristics. Moreover, it is beneficial to make full use of the role of electronic information dissemination and establish a national multicenter clinical trial registry and scientific research collaboration electronic platform to strengthen the communication of clinical research information and scientific achievement between the ethics committee of the leading unit and the participating institutions and to promote mutual recognition of ethics reviews and process simplification. There are some limitations in this study. First, although this study was initiated by the breast cancer expert committee of the NCC, it covered a limited number of hospitals, which were mainly provincial and prefectural medical centers. Few national and county-level oncology centers participated, and some representative hospitals were not involved in the questionnaire survey. Moreover, the sample sizes of the participating hospitals in different regions varied, and the participating departments were mainly surgical departments, which is due to the particular characteristics of breast cancer treatment. Thus, this difference may have weakened the results of difference analysis to a certain extent. Additionally, the questionnaires were completed by department leaders or secretaries, which basically reflects the real characteristics of the current status of clinical research, but there is unavoidable subjectivity. However, our questionnaire design did not involve precise numerical questions, and most of the questions were generalized options to avoid the lack of authenticity of the information provided by subjective completion. Finally, the questionnaire may not have comprehensively covered all the levels of clinical capabilities, and some of the possible problems and needs cannot have been truly reflected. Conclusions This questionnaire survey provided insights into the current situation and improvement potential of the drug clinical research implementation quality of breast cancer treatment departments. Our study indicated that breast cancer treatment departments in China basically process complete drug clinical research implementation and meet the needs of rapid development and drug innovation. However, there is still room for improvement in terms of implementation efficiency, quality and quantity of research and patient recruitment. 
Uneven development and needs in some respects between different regions, departments and hospital classifications still exist. Most departments still require professional training and communication, as well as the recommendation and implementation of clinical research. The above findings are expected to lead future development directions for clinical research on breast cancer drugs in China. This work was supported by the CAMS Innovation Fund for Medical Sciences ( CIFMS , Grant No. 2022-I2M-2-001 ). Not applicable. The data that support the findings of this study are available on request from the corresponding author. Bo Lan: Writing – review & editing, Writing – original draft, Methodology. Xuenan Peng: Writing – original draft, Visualization, Methodology, Formal analysis. Fei Ma: Validation, Supervision, Funding acquisition, Conceptualization. None. |
Expression of Pluripotency Factors OCT4 and LIN28 Correlates with Survival Outcome in Lung Adenocarcinoma | 28765732-ef26-4668-a88c-ca88771f11d8 | 11205930 | Anatomy[mh] | Adenocarcinoma of the lung (LADC) remains a major cause of cancer-related mortality worldwide despite recent therapeutic advantages. In the 2015 World Health Organization classification, invasive lung adenocarcinoma was further classified into different subtypes, with different prognoses such as EGFR and KRAS mutations, as well as Anaplastic lymphoma kinase (ALK) and ROS1 translocations, which are the most common molecular alterations detected in lung adenocarcinomas and form the basis for targeted therapies . Stem cells are either partially differentiated or they partially differentiated cells that can further differentiate into various other cell types and divide indefinitely to produce more of the same stem cell . These cells have unique properties of self-renewal and pluripotency . Stem cells continue to self-renew due to an autoregulated network of transcription factors, which inhibits differentiation and promotes proliferation . Dysregulation of these mechanisms can lead to premature differentiation and/or continuous self-renewal/proliferation of stem cells, which is a well-known hallmark of cancer progression . The LIN28 gene encodes an RNA-binding protein that governs the post-transcriptional regulation of gene expression; thus, it has a crucial role in tissue development . LIN28 controls stem cell self-renewal, thus influencing many cellular functions including cell growth, stem cell differentiation, metabolism and carcinogenesis. The LIN28 family of RNA-binding proteins consists of two highly conserved homologs LIN28A and LIN28B with similar functions . LIN28 binds to let-7 pre-microRNA, and it blocks the biogenesis of mature let-7 in mouse embryonic stem cells (ES) . Various studies link microRNA to critical oncogenic pathways such as the RAS pathways, Myc and JAK-STAT3. In particular, reduced let-7microRNA expression has been observed in many types of cancer, resulting in cancer progression and adverse patient prognosis . OCT4 (also known as POU5F1) is a protein that is encoded in humans by the POU5F1 gene, which is located on chromosome 6p21 . During embryonic development, the expression of OCT4 affects the differentiation of embryonic stem cells that are maintaining their capacity for self-renewal. Also, OCT4 expression is increased in germ cell and embryonic cell tumors, rendering OCT4 a molecular marker of germ cell tumors . OCT4 and LIN28 are transcription factors with a key role in pluripotency maintenance in mammalian ES and induced pluripotent stem cells (iPS), regulating, in this way, cancer progression . However, their role in lung adenocarcinoma has not yet been fully clarified. The aim of this study was to explore the role of pluripotency factors OCT4 and LIN28 (homolog A and B) in a cohort of surgically resected human lung adenocarcinomas to reveal the possible biomarkers for diagnosis and prognosis and the potential therapeutic targets for lung adenocarcinomas. Our study included 96 patients who underwent surgical resection for lung adenocarcinoma at the University Hospital of Patras between 2000 and 2009. All tumors were formalin-fixed, paraffin-embedded (FFPE). 
The hematoxylin and eosin (H&E)-stained slides of the specimens were reviewed by an expert pathologist (PB) to determine the histological subtype, grade and T- and N-stage of the tumor according to the revised 2015 World Health Organization (WHO) classification of Lung Tumors . A representative block was selected for each patient. Non-neoplastic lung parenchyma adjacent to the tumor was also present in most of the blocks (95%). Medical history and clinical outcomes were retrieved from the patients’ records from the Division of Oncology of the University Hospital of Patras. Overall survival was evaluated after an observation period of 5 years (60 months). This study was approved by the Bioethics & Research Committee of the University Hospital of Patras, Greece (approval code: 509 and approval date: 11 July 2019) in full compliance with the guidelines detailed in the Declaration of Helsinki, which provides the “ethical principles for medical research involving human subjects”. 2.1. Immunohistochemical Staining Serial 3 μm tissue sections were cut, mounted on poly-L lysine-coated slides and subjected to immunohistochemical staining. Briefly, the sections were initially dried for 24 h at 60 °C, deparaffinized in xylene and hydrated in gradient alcohol. The antigen was retrieved in Tris/EDTA buffer (pH 9) with a pressure antigen retrieval procedure for 12 min. Next, endogenous peroxidase was inactivated using a peroxidase-blocking solution (0.3% H 2 O 2 ) at room temperature for 10 min. The sections were then incubated with the primary antibodies. Information about the primary antibodies, as well as the positive and negative controls used for antibody validation, are shown in . Immunohistochemical signaling was detected with Dako EnVision polymer (Dako EnVision Mini Flex, Dako Omnis, Angilent Technology Inc., Santa Clara, CA, USA, GV823). Diaminobenzidine (Dako Omnis, Santa Clara, USA, GV823) was used as a chromogen, and Harris Hematoxylin was used for nuclear counterstaining. 2.2. Evaluation of the Immunohistochemical Staining The immunoreactivity was assessed by an expert pathologist (PB), who was blinded to the pathological and clinical characteristics of each case. The intensity and the distribution of positively stained cancer cells were evaluated as described below. The localization (nuclear and cytoplasmic) of the stains was also evaluated. The immunoreactivity was calculated with the following formula: The staining immunoreactivity was scored from 0% to 100% (at 5% as intervals) by calculating the proportion of positive tumor cells (more than 1000 cells were counted). The intensity of stained cells was assessed with a three-tiered scale. The overall score was calculated by multiplying the percentage of positive-stained cells by the intensity of the staining, ranging from 0 to 300. Other components of the tumor microenvironment, such as lymphocytes, macrophages and endothelial cells were also evaluated and scored as positive or negative based on the presence or absence of any staining. Microphotographs were obtained by a Lumenera INFINITY HD digital camera (Teledyne Lumenera Co, OTT, Canada) mounted on an Olympus BX41 microscope (Olympus Europa SE & Co, Hamburg, Germany). 2.3. Statistical Analysis 2.3.1. Associations of OCT4 and LIN28 with Clinicopathological Parameters and the Correlations between Proteins Statistical analysis was performed using the Statistical Package for Social Sciences version 25 (IBM Corp. Released 2017. IBM SPSS Statistics for Windows, Version 25.0. 
Armonk, NY, USA). The expression of the markers was associated with clinicopathological parameters. Categorical variables were evaluated with the Chi-square or Fisher exact tests. For ordinal or continuous variables, Kruskal–Wallis or Mann–Whitney tests were used for comparisons between groups. Correlations between the expressions of the proteins were performed using Spearman's correlation test. 2.3.2. Survival Analysis Survival analysis was assessed with Kaplan–Meier plots, and the differences between groups were evaluated with the exact log-rank test. OS and DFS rates were calculated as the interval between the date of diagnosis and the date of death (or the last follow-up). Multivariable analysis was performed with Cox's proportional hazard regression model. A p value < 0.05 was considered statistically significant.
3.1. Clinical, Demographic and Histopathological Data The patients' characteristics are summarized in . Ninety-six (96) cases were included in this study. The median age of the patients was 65.5 years (range 39–84). Sixteen patients (16.7%) had undergone pneumonectomy, 68 (70.8%) lobectomy, 9 (9.3%) double lobectomy and 3 (3.1%) wedge excision. Two- and three-year survival outcomes were available in 88 patients, and five-year survival outcomes were available in 81 patients. 3.2. Expression of OCT4 in Lung Adenocarcinoma Positive OCT4 immunohistochemical staining was observed in the nuclei of the neoplastic cells. The epithelial cells of adjacent non neoplastic lung tissue, lymphocytes and stromal cells were negative for OCT4 . In 61/96 patients, a positive nuclear expression of OCT4 (63.5%) was noted. The immunohistochemical score of OCT4 nuclear expression ranged between 0 and 120 (mean = 4 ± 5) (±SD). The relationships between OCT4 immunohistochemical expression and the clinicopathological data of the patients are presented in . No significant correlations were observed between OCT4 expression and age ( p = 0.595), gender ( p = 0.939), histological subtype ( p = 0.673) and clinical stage ( p = 0.542). The immunohistochemical expression of OCT4 in patients with lung adenocarcinoma was associated with 2-, 3-, and 5-year OS rates. A higher nuclear expression of OCT4 was associated with improved 5-year OS rates ( p = 0.008). Patients with a higher expression of OCT4 had improved outcomes compared to patients with lower OCT4 expression levels . 3.3. Expression of LIN28A in Lung Adenocarcinomas Positive immunohistochemical expressions of LIN28A were observed only in the nuclei of malignant epithelial cells. Epithelial cells of adjacent non neoplastic lung tissue, lymphocytes and stromal cells were negative for LIN28A .
In 62/96 patients (64.5%), positive LIN28A nuclear staining was observed, while no LIN28A immunohistochemical expression was observed in 34/96 patients (35.4%) . The immunohistochemical score of the positive nuclear immunohistochemical expression ranged between 0 and 75 (median 4 ± 6) (±SD). The relationship between the LIN28A immunohistochemical expression and the clinicopathological data of the patients is presented in . The immunohistochemical expression of LIN28A was associated with tumor stage and the 5-year survival outcome in patients with lung adenocarcinoma. Patients with metastatic lymph nodes (stage N2) had lower LIN28A expression compared to patients with N0 and N1 disease ( p = 0.01) . No statistically significant correlations were observed between LIN28A expression and 5-year OS rates ( p = 0.123), age ( p = 0.779), gender ( p = 0.538), histological subtype ( p = 0.678) and stage ( p = 0.512). 3.4. Expression of LIN28B in Lung Adenocarcinomas Positive LIN28B immunohistochemical expression was observed and evaluated in the nucleus and cytoplasm of lung adenocarcinoma cells. In adjacent non neoplastic lung tissue, epithelial cells, lymphocytes and stromal cells were negative for LIN28B . In 68/96 (70.8%) of the patients, positive LIN28B nuclear expression was observed, while no LIN28B nuclear expression was observed in 28/96 (29.2%) patients. In 78/96 patients (81.3%), positive LIN28B cytoplasmic expression was observed, while there was negative LIN28B cytoplasmic expression found in 18/96 patients (18.8%). The immunohistochemical score of the nuclear LIN28B expression ranged between 0 and 140 (median 14 ± 24) (±SD). The immunohistochemical score of the cytoplasmic LIN28B expression ranged between 0 and 210 (median 68 ± 52) (±SD). The relationships between LIN28B immunohistochemical expression and the clinicopathological data of the patients are presented in . Nuclear and cytoplasmic LIN28B expression was associated with patient stage and survival. Positive LIN28B cytoplasmic expression was related to 5-year survival in patients with lung adenocarcinoma. Patients with lower LIN28B cytoplasmic expression had a better 5-year survival rate ( p = 0.005) compared to patients with increased LIN28B expression . No associations between LIN28B cytoplasmic expression and stage ( p = 0.562), age, gender and histological subtype were observed. Increased LIN28B nuclear expression was statistically significantly associated with poor 2-year survival rates ( p = 0.021) . The association between LIN28B nuclear expression and stage revealed that patients with early stage lung adenocarcinoma (stages I and II) had statistically significantly lower nuclear expression ( p = 0.046). No statistically significant association was observed between the nuclear LIN28B expression and age, gender and histological subtype.
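The group comparisons reported above follow a standard workflow: a composite immunoreactivity score (percentage of positive tumor cells multiplied by staining intensity), Kaplan–Meier curves compared with a log-rank test, and a Cox proportional hazards model for multivariable analysis. The sketch below illustrates that workflow in Python on a handful of invented patients; it is not the study's analysis (which was run in SPSS on the actual cohort), and every value, column name, and cut-off shown here is hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy cohort: score = % positive tumor cells (0-100) x staining intensity (0-3)
df = pd.DataFrame({
    "pct_positive": [60, 10, 80, 0, 40, 90, 20, 70],
    "intensity":    [2,  1,  3,  0,  2,  3,  1,  2],
    "months":       [58, 22, 60, 14, 35, 49, 18, 60],  # follow-up time in months
    "death":        [0,  1,  0,  1,  1,  1,  1,  0],   # 1 = deceased, 0 = censored
})
df["ihc_score"] = df["pct_positive"] * df["intensity"]                    # range 0-300
df["high_expr"] = (df["ihc_score"] >= df["ihc_score"].median()).astype(int)

# Kaplan-Meier estimate per expression group (call km.plot_survival_function() to draw)
km = KaplanMeierFitter()
for label, grp in df.groupby("high_expr"):
    km.fit(grp["months"], event_observed=grp["death"], label=f"high_expr={label}")

# Log-rank test between high and low expressors
hi, lo = df[df.high_expr == 1], df[df.high_expr == 0]
lr = logrank_test(hi["months"], lo["months"],
                  event_observed_A=hi["death"], event_observed_B=lo["death"])
print("log-rank p =", lr.p_value)

# Cox proportional hazards model with the continuous score as covariate (toy data only)
cph = CoxPHFitter()
cph.fit(df[["months", "death", "ihc_score"]], duration_col="months", event_col="death")
print(cph.summary[["coef", "p"]])
```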
Lung cancer is the leading cause of cancer mortality worldwide . In recent years, significant progress has been made in the discovery of molecular changes; however, the pathogenesis of the disease has not been fully clarified. In this study, we examined the role of pluripotency factor OCT4 and LIN28 (and their A and B homologs) in lung adenocarcinoma in relation to prognosis. In our study, OCT4 was overexpressed in lung adenocarcinoma, and we showed that a higher OCT4 expression was associated with improved 5-year OS rates. The latest trend in OCT4 research is in connecting OCT4 to epigenetic regulations, which are crucial in cancer development . However, results about the prognostic role of OCT4 are contradictory. In line with our findings, studies conducted on oral cancer and testicular cancer demonstrated that higher OCT4 expression was associated with better OS rates. It should be noted here that OCT4 has two isoforms (OCT4A/B). It is possible that the antibodies used in different studies target different regions of the OCT4 protein and therefore detect different isoforms. In contrast, in many types of cancer such as breast cancer and acute myeloid leukemia, increased OCT4 is associated with reduced overall survival rates compared to patients with low OCT4 expression . In esophageal carcinoma, increased OCT4 expression was associated with poor prognosis . In lung cancer, a meta-analysis published in 2019 highlighted that increased OCT4 expression was associated with lower overall survival and higher TNM stage . These results contradict our study, where high OCT4 expression in lung adenocarcinoma was associated with better overall survival rates. More studies need to be conducted in large cohorts of patients to elucidate the prognostic role of OCT4 in lung adenocarcinoma. We also showed that LIN28A is overexpressed in lung adenocarcinoma. However, in our cohort of lung adenocarcinoma patients, no statistically significant association was found between LIN28A expression and aggressive tumor parameters or patient prognosis. We found that patients with metastatic lymph nodes (N2) had lower LIN28A expression compared to patients with N0 and N1 disease ( p = 0.01), which contradicts the current literature. It is possible that the relatively small number of patients in our cohort is a limitation of this analysis. Several studies have revealed that stem cell markers LIN28A and LIN28B regulate gene expression, either by directly binding to messenger RNA (mRNA) or by blocking the biogenesis of Let-7 microRNAs; thus, they are implicated in cancer development . LIN28A, in combination with NANOG, OCT4 and SOX2, can reprogram human somatic cells into pluripotent stem cells. LIN28A also regulates mammalian stem cell self-renewal and promotes tissue repair . LIN28A has been found to be reactivated in ~15% of human cancers and is considered a biomarker of multiple advanced cancers. A high level of LIN28A protein and the subsequent blockage of let-7 biogenesis is associated with tumorigenesis, invasiveness and poor prognosis of malignancies such as lung cancer, liver cancer, breast cancer, gastric cancer and prostate cancer .
To the best of our knowledge, the role of LIN28A in human lung adenocarcinoma tissue samples has not been investigated before in the literature. In a recent in vitro study using A549 lung adenocarcinoma cells, LIN28A was linked to MMP2/9 expression. In particular, LIN28A silencing ameliorated MMP2/9 expression levels, as well as metastases. Consequently, LIN28A serves as a marker for tumor development and invasion with potential therapeutic uses . We also observed that LIN28B was overexpressed in lung adenocarcinoma with prognostic value. Increased nuclear and cytoplasmic LIN28B expression was associated with advanced patient stage and reduced survival rates. To the best of our knowledge, there is no other study exploring the role of LIN28A/B in association with prognosis in human lung adenocarcinoma tissue samples. However, in vitro experiments have been conducted in lung cancer cell lines. Our findings agree with the current literature. LIN28B is implicated in the development of multiple tumors such as hepatocellular carcinoma. However, the mechanism of LIN28B activation in cancer remains unclear . Overexpression of LIN28A/B has been associated with poor prognosis in many cancers. In a recent meta-analysis including 3772 LIN28A-associated and 1730 LIN28B-associated cases, elevated LIN28A/B expression was significantly associated with poor prognosis in human malignancies such as gastric carcinoma , esophageal carcinoma , hepatocellular carcinoma , breast carcinoma , squamous cell carcinoma of the oral cavity and adenocarcinoma of the pancreas . A genome-wide analysis study in lung cancer revealed that the H19 gene, which is associated with tumor-cell proliferation, is involved in many types of cancer , and it causes an increase in LIN28B expression, which, in turn, promotes lung cancer . In experimental mouse models of non-small cell lung carcinoma, it was found that LIN28B overexpression significantly increased the number of tumor cells, accelerated tumor initiation and resulted in reduced overall survival rates . Also, elevated LIN28B levels have been found in 24% of lung carcinomas harboring the KRAS mutation . Another in vitro study in lung carcinoma cell lines revealed that micro-RNA miR-563 targets and represses LIN28B, thus causing a decrease in cell proliferation . These studies support the prognostic role of LIN28B, as demonstrated in our study, where patients with low nuclear expression had better 5-year survival rates. Our results also highlight LIN28 as an attractive therapeutic target in lung cancer. In conclusion, our study shows that the pluripotency factor OCT4 and LIN28 (and their homologs A and B) are implicated in lung adenocarcinoma development and progression with prognostic value. In particular, LIN28B may serve as a marker for dismal patient prognosis in lung adenocarcinoma. Further studies are needed to elucidate their role in lung adenocarcinoma and to explore their potential application as therapeutic agents. |
Herbal medicine and acupuncture relieved progressive bulbar palsy for more than 3 years: A case report | e0acdbb2-a2e2-4b78-8b21-9e2f6dfe3339 | 9666122 | Pharmacology[mh] | Motor neuron disease (MND) is characterized by the degeneration of both upper and lower motor neurons, leading to muscle weakness and eventual paralysis. The progressive neurological deterioration involves the corticospinal tract, brainstem and anterior horn cells of the spinal cord. Death generally occurs within 2 to 4 years due to respiratory failure. The most common motor neuron disease is amyotrophic lateral sclerosis (ALS). Other forms include primary lateral sclerosis, progressive muscular atrophy, and progressive bulbar palsy (PBP). The initial symptom of MND is often extremity weakness; about 70% of patients present with this “limb-onset” disease. The remaining 25% present with dysarthria and dysphagia, and about 5% of patients present with trunk or respiratory symptoms at the onset. According to a systematic analysis of the global burden of MND in 2016, the worldwide all-age prevalence was 4.5 (4.1–5.0) per 1,00,000 people, and the all-age incidence was 0.78 per 1,00,000 person-years. The pathophysiology of MND remains unknown, which limits the development of disease-modifying therapies. The only 2 approved drugs for the treatment of MND are riluzole and edaravone. Riluzole can only prolong the median survival time by approximately 2 to 3 months in patients with ALS. It is still unclear whether edaravone therapy prolongs survival in the long term. Due to the lack of effective treatment, many patients with MND in China turn to traditional Chinese medicine (TCM) treatment. Studies have verified the effectiveness of acupuncture in improving swallowing ability after stroke. It has also been reported that Chinese herbal medicine and acupuncture may be an effective treatment for MND, relieving symptoms and improving quality of life. However, there are few studies that investigate the effectiveness and safety of traditional Chinese medicine in treating dysphagia and sialorrhea in patients with MND. Here, we report a case of successfully alleviating the symptoms of dysphagia and sialorrhea for a patient with PBP with Chinese herbal medicine and acupuncture for more than 3 years. The ethics committee of Guang’anmen Hospital, China Academy of Chinese Medical Sciences approved the study. The patient has provided informed consent for publication of this case and the writing of this case followed CARE guidelines. 2.1. Clinical presentation The timeline with clinical and procedural data is shown in Figure . The patient was a 68-year-old lady, who in October 2016, presented with dysarthria, dysphagia, and sialorrhea of unknown origin. There was no tongue numbness, abnormal taste, hoarseness, dyspnea, diplopia or blurred vision, limb weakness or numbness, dizziness, or loss of consciousness. The local hospital did not give a precise diagnosis, and provided treatments for cerebral infarction. After treatment with antiplatelets, circulation improvement and reducing her lipid, the symptoms did not improve and aggressively progressed. On January 9, 2018, the patient sought medical advice at Beijing Xuanwu Hospital, National Center for Neurological Disorders of China. Physical examination showed normal function of the advanced cortex, bilateral tongue muscle atrophy, fibrillation, normal pharyngeal reflex, uvula in the middle, and negative mandibular reflex. 
Muscle strength and muscular tension were normal, and the reflexes of the bilateral biceps brachii, triceps brachii, and radial membrane were hyperactive. The left palmomaxillary reflex was positive, the pathological signs of both lower limbs were negative, and the water-swallowing test result was grade 2. The mini-mental state examination score was 30. Supplementary examination showed no obvious abnormality in the blood, urine, stool, or cerebrospinal fluid. She underwent head and cervical MRI. The head MRI showed a right frontal subcortical punctate ischemic focus while the cervical MRI showed C4 to C5 and C5 to C6 intervertebral disc herniation. On electromyography images, the sensory conduction velocity of the double median nerve (finger I and finger III) had slowed and the amplitude of sensory conduction of the left median nerve (finger I) had decreased. Neuropsychological examination, carotid ultrasound, and intracranial artery ultrasound showed no obvious abnormality. The patient was an elderly female with unclear onset and progressive aggravation of symptoms, mainly manifested as bulbar paralysis. The first consideration was neurodegenerative disease involving the bulbar for qualitative diagnosis, emphasizing PBP. The onset type of ALS was not excluded from the differential diagnosis. The patient had no evidence of involvement of the anterior horn of the spinal cord, such as limb muscle weakness, atrophy, or muscle fasciculation. The electromyography results were not that illuminating and the patient was diagnosed with MND/PBP. She began treatment with riluzole 50 mg twice per day to inhibit glutamate release. Mecobalamin 0.5 mg and vitamin B 1 10 mg 3 times per day were prescribed to improve nerve function. After taking riluzole for 10 months, the patient’s symptoms gradually worsened, and she stopped taking the drug. On December 7, 2018, the patient came to our traditional Chinese medicine hospital for treatment. Symptoms at the time of her first visit were as follow: dysarthria, dysphagia, and excessive saliva, which required using a handkerchief. She had atrophy and fibrillation of tongue muscle, weakness of limbs, feeling of limb muscle fasciculation, and no apparent atrophy of limb muscles. She reported poor sleep quality, a weight loss of 5 kg due to poor nutrition in the past 6 months, constipation, normal urination, pink tongue and greasy coating, and deep and slow pulse. 2.2. Interventional procedure According to TCM theory, we determined that she had flaccid disease and the syndrome of deficiency of spleen and kidney yang. The herbal medicine prescription was formulated to strengthen the spleen and kidney, supplement qi, and warm yang. Huangqi Shengji decoction was the main prescription of Chinese herbal medicine. The specific medication and dosage were as follows: milkvetch root 80 g, cassia twig 15 g, Chinese angelica 30 g, prepared rehmannia root 10 g, debark peony root 30 g, Sichuan lovage rhizome 10 g, suberect spatholobus stem 30 g, prepared common monkshood branched root 30 g, ephedra 15 g, alum processed pinellia 15 g, thunberbg fritillaria bulb 10 g, loquat leaf 30 g, inula flower 20 g, 2-toothed achyranthes root 15 g, manchurian wild ginger 10 g, spine date seed 30 g, tuber fleeceflower stem 30 g, liquorice root 10 g, golden thread 15 g, snakegourd fruit 30 g, largehead atractylodes rhizome 30 g, deer-horn glue 6 g, human placenta 3 g, nux vomica 0.3 g. The herbal medicines were decocted with water, about 200 ml each time, twice a day. 
For acupuncture treatment, unilateral Lianquan (CV23) and Zhiqiang (Extra, between hyoid and the upper notch of thyroid cartilage), bilateral Fengchi (GB20), Hegu (LI4), Toulinqi (GB15), Shenting (GV24), Baihui (GV20), Quchi (LI11), Gongxue (Extra, 40 mm below Fengchi), Tunyan (Extra, between hyoid and prominentia laryngea, 5 mm next to the anterior median line), Fayin (Extra, 5 mm next to the median line under prominentia laryngea, between the thyroid cartilage and cricoid cartilage) and Wai Jinjin Yuye (Extra, 25 mm next to Lianquan, the left side is Wai Jinjin and the right side is Wai Yuye) were chosen. The positions of the acupoints are shown in Figure . We inserted 0.30 × 25 mm stainless steel, single-use, sterile needles 3 to 5 mm vertically at Tunyan, Fayin and Zhiqiang and 0.30 × 40 mm needles 30 to 35 mm toward the root of the tongue at Wai Jinjin Yuye and CV23. After insertion, gentle and even manipulations involving twirling and rotating at a frequency of 180/minute were performed to attain deqi (a sensation of soreness, aching, swelling, numbness, or heaviness) at these acupoints. After twirling for 15 seconds, the needles were pulled out. 0.30 × 40 mm needles were inserted to a depth of 20 to 30 mm at Gongxue, GB20, LI4, GB15, GV24, GV20, and LI11 and were kept for 30 minutes. The GB20 was connected with electroacupuncture. A continuous wave was given; the frequency was 2 Hz, and the current intensity was 2 mA. The treatment frequency was once every other day, 3 times a week. 2.3. Follow-up and patient perspective On January 18, 2019, 1 month after the combined acupuncture and herbal medicine treatment, the patient's saliva decreased slightly, fatigue improved, and appetite increased. The other symptoms were the same as before. We slightly adjusted the herbal medicine prescription and continued the acupuncture treatment. On March 12, 2019, after 3 months of treatment, saliva decreased significantly, and the frequency of limb muscle fasciculation decreased. On April 2, 2019, after 4 months of treatment, she had less saliva, dysphagia was relieved considerably, and she had no problem eating. The strength of limbs was enhanced, and sleep condition was also improved. The change in amyotrophic lateral sclerosis functional rating scale revised score in the first 6 months of treatment is depicted in Figure . In a later follow-up, the patient's symptoms were stable. Her herbal medicine prescription was changed slightly according to the syndromes once or twice a month, and acupuncture treatment was performed two to three times a week. During treatment, no abnormality was found in liver and kidney function testing. The patient's condition has been stable for more than 3 years, and she continues to be treated with Chinese herbal medicine and acupuncture in our clinic.
PBP is a form of MND, which is less common than ALS. Among patients with PBP, 87% progress to definite ALS. Epidemiological statistics show that PBP accounts for 4.1% of MND in China. The onset age of PBP is generally late, mainly after 40 or 50 years of age. The main symptoms include dysarthria, dysphagia, tongue muscle atrophy, and fasciculations. This type of disease is generally severe and develops rapidly, and most patients die of respiratory muscle paralysis or lung infection within 1 to 2 years. The mechanisms underlying neurodegeneration in MND remain incompletely understood. Many cellular and molecular processes have been implicated, including toxic protein aggregation, mitochondrial dysfunction, excitotoxicity, hypermetabolism, oxidative stress, and inflammation. The only 2 approved drugs for the treatment of MND are riluzole and edaravone. A 50 mg dose, twice a day for 18 months, of riluzole, a glutamatergic neurotransmission inhibitor, can delay the course of the disease and prolong the median survival time by about 2 to 3 months in patients with ALS. Edaravone, a free-radical scavenger of peroxyl radicals, showed efficacy in a small subset of people with MND in maintaining function and quality of life in the early stage. It is still unclear whether edaravone therapy prolongs survival in the long term. Although promising outcomes were obtained in preclinical studies, numerous compounds investigated failed in human clinical trials, and there is no available treatment to stop or reverse the progressive course of MND. Symptomatic treatments include the treatment of cramps, pain, spasticity, noninvasive ventilation for supporting respiratory function, and enteral tube feeding to support nutrition deficiencies. The main complaint of our patient was progressive dysphagia and sialorrhea. For dysphagia, feeding tube placement and percutaneous endoscopic gastrostomy (PEG) may be necessary if the patient has poor nutrition and loses weight. For sialorrhea, botulinum toxin type-B injections to parotid and submandibular glands are mostly effective in the short term (up to 4 weeks). However, there is probably no benefit beyond this time after a single injection. Anticholinergic medications (amitriptyline and glycopyrronium bromide) are often used for treating sialorrhea, but there is not enough evidence proving the efficacy of these drugs in MND. The patient turned to TCM for symptom relief. After 4 months of herbal medicine combined with acupuncture treatment, the dysphagia and sialorrhea were significantly reduced, and her quality of life improved markedly. She avoided a PEG and feeding tube insertion. For this patient, almost 6 years have passed since the onset of symptoms, and the treatment has been maintained for more than 3 years. The disease has not deteriorated or progressed to ALS, and a relatively good treatment effect has been achieved. Under TCM theory, we believe that the patient belongs to the syndrome of deficiency of spleen and kidney yang. Therefore, various herbal medicines are used to tonify the spleen and kidney, warm yang, and replenish qi. TCM is a complementary and alternative treatment for MND, especially in China, and includes herbal medicine, massage, acupuncture, moxibustion, and other methods, among which herbal medicine and acupuncture are most commonly used. To alleviate the symptoms, 90% of patients take Chinese herbal production in Shanghai, China. In animal models, TCM improved motor function and extended survival duration by inhibiting inflammation. 
Herbal medicines have been testified to prolong survival duration and relieve symptoms for patients with MND in some case reports and clinical studies. However, the credibility of these findings is limited by the non-RCT design, unverified outcome measures, a small sample size, or short follow-up. Thus, these reports cannot provide evidence-based support for the clinical use of TCM in the treatment of MND. A prospective registry study has been conducted in China to clarify whether TCM is an appropriate therapy for patients with MND (CARE-TCM). This study will help identify common diagnostic approaches and treatment modalities among Chinese patients with ALS receiving TCM treatment, enabling the establishment of strategies for treatment based on evidence-based medicine. To promote the application of herbal medicine as an alternative therapy in the treatment of MND, animal experiments that explain the pharmacology and toxicology and large-scale and rigorously designed high-quality clinical studies should be performed. Acupuncture is a type of complementary and alternative medicine that has been widely used in China, Korea, and Japan for centuries. Experiments on animals suggest that electroacupuncture treatment can help increase anti-inflammation activity in the central nervous system and respiratory system of animals with MND. Unfortunately, the number of clinical studies on acupuncture for the treatment of MND is minimal. In our case, we used neck acupuncture, which is often used for aphasia or dysphagia caused by bulbar paralysis after stroke, to solve the problem of dysphagia and sialorrhea. Its effectiveness and safety have been confirmed by many clinical studies. Commonly used acupoints include Lianquan, Wai Jinjin Yuye, Renying, Tiantu, and Fengchi. We do acknowledge that it is hard to determine whether the combined treatment is superior to herbal medicine or acupuncture, when used alone, in alleviating symptoms and improving quality of life of patients with MND/PBP. When herbal medicine and acupuncture are used simultaneously, the efficacy of the 2 therapies cannot be distinguished, either. In TCM hospitals, due to the rapid progression of the disease and difficulty in treatment, patients with MND are managed with multiple TCM therapies. Each therapy has its own indications and limitations. Acupuncture is good at improving dysphagia while herbal medicine is good at improving some of the associated symptoms. For acupuncture treatment, patients have to come to the hospital 3 times a week, whereas herbal medicine can be taken at home to ensure the continuity of treatment when patients are unable to visit the hospital. Our case report suggests that acupuncture combined with Huangqi Shengji decoction may alleviate dysphagia and salivation in patients with PBP. When faced with patients with MND/PBP with dysphagia and salivation symptoms in clinical practice, TCM doctors or acupuncturists can use this combined treatment and observe the therapeutic effect. Due to the low incidence rate and the complexity of TCM interventions, it is difficult to conduct standardized randomized controlled trials to investigate the efficacy and safety of this combined treatment. However, case-control, prospective cohort, or observational studies can be conducted to observe the therapeutic effects. After the preliminary evaluation of the efficacy of the combined treatment, we can proceed to interventional studies. 
This paper reports a case of Chinese herbal medicine combined with acupuncture in the treatment of MND/PBP that successfully alleviated dysphagia and sialorrhea. Our report suggests that alternative therapies, such as herbal medicine and acupuncture, may effectively reduce the symptoms of MND/PBP. However, standardized clinical studies are still required to verify the effectiveness and safety of this treatment. Conceptualization: Yajing Yang, Jinxia Ni. Resources: Weiqian Chang, Yukun Tian. Visualization: Shaohong Li. Writing – original draft: Wenzeng Zhu. Writing – review & editing: Siyang Peng. |
Telemedicine in ophthalmology - where are we and where are we going?
Proteome‐Wide Association Study for Finding Druggable Targets in Progression and Onset of Parkinson's Disease | f8d99eec-1269-4637-80ac-a4603120cb0f | 11862824 | Biochemistry[mh] | Introduction Parkinson's disease (PD) is a neurodegenerative disorder characterized by the progressive loss of dopaminergic neurons in the substantia nigra pars compacta and the accumulation of α‐synuclein aggregates, known as Lewy bodies. It is the second most prevalent neurodegenerative disease after Alzheimer's disease . Epidemiological studies indicate a global increase in PD cases, rising from 2.5 million to 6.1 million over the past three decades . With the aging global population, the incidence of PD is projected to escalate significantly, imposing substantial socioeconomic burdens on patients and healthcare systems . PD manifests through a spectrum of motor symptoms, including tremors, rigidity, bradykinesia, and postural instability, resulting from the degeneration of dopaminergic neurons . In addition to these motor deficits, PD encompasses a range of non‐motor symptoms such as cognitive decline, mood disorders, and autonomic dysfunction, which contribute to the disease's complexity and severely impact the quality of life of affected individuals . The heterogeneity in disease progression, characterized by varying rates of motor and cognitive deterioration among patients, presents significant challenges for effective treatment and management strategies. Current therapeutic approaches for PD primarily aim at symptomatic relief, employing medications like levodopa and dopamine agonists to replenish dopamine levels and alleviate motor symptoms . While these treatments can provide temporary improvement, they do not halt the underlying neurodegenerative processes driving the disease . The absence of disease‐modifying therapies underscores the urgent need for interventions that can influence both the onset and progression of PD. Advancements in proteomics and genomics have opened new avenues for identifying biomarkers and therapeutic targets in complex diseases such as PD. Proteome‐wide association studies (PWAS), leveraging protein quantitative trait loci (pQTL) data, facilitate the identification of protein‐level associations with disease phenotypes . Specifically, plasma and brain proteomics offer valuable insights into systemic and central nervous system‐specific protein alterations linked to PD . Integrative genomic analyses that combine genome‐wide association studies (GWAS) with proteomic data enable the elucidation of causal relationships between genetic variants, protein expression, and disease traits . Methodologies such as PWAS, summary‐based Mendelian randomization (SMR) , colocalization analyses , and phenome‐wide MR (PheW‐MR) are instrumental in dissecting the genetic architecture of PD and identifying proteins that may serve as potential therapeutic targets. Therefore, by integrating these powerful and steady approaches in a logical order, our study aims to identify latent but reliable drug targets for PD. Few studies that explored PD's targets focused on the developing procedure of this neurodegenerative disease, while the primary objective of our study is to identify and validate potential therapeutic targets for both the onset and progression of PD through integrative proteomic and genetic analyses, providing novel perspectives on the dynamic changes associated with PD. 
By harnessing large‐scale plasma and brain pQTL datasets from the deCODE Health study and the Religious Orders Study/Rush Memory and Aging Project (ROS/MAP), respectively, we conducted comprehensive PWAS to uncover proteins associated with various PD phenotypes, including PD onset and three distinct progression phenotypes: composite, motor, and cognitive. These PD phenotypes were selected to comprehensively capture the entire disease trajectory. Subsequent sensitivity analyses, including SMR and colocalization, were employed to confirm the causal relevance of these proteins and PD phenotypes. Additionally, a reverse MR analysis was performed to explore potential bidirectional causal relationships between proteins and PD. The identification of causal proteins may highlight candidate drug targets for PD treatment. Furthermore, PheW‐MR analyses were conducted to assess potential side effects of targeting these candidate proteins, thereby informing the safety and efficacy of prospective therapeutic interventions for PD. Given that PD primarily affects the brain, it is essential to understand the cellular distribution of the genes encoding candidate drug target proteins across various brain regions to develop effective therapies. To achieve this, we utilized gene expression data from the ABA to perform cluster analysis, refining the distribution of candidate targets within the brain and identifying co‐expression patterns in specific cell populations. Moreover, we employed protein–protein interaction (PPI) networks to investigate interactions between candidate proteins across multiple PD phenotypes, thereby elucidating functional relationships and exploring the potential for multi‐target drug development. Then, we integrated drug target information from the DrugBank database to explore opportunities for drug repurposing of candidate targets . Figure shows the research workflow of this study. Taken together, the identification of causal protein targets not only enhances our understanding of PD pathogenesis but also paves the way for the development of disease‐modifying therapies and personalized medicine approaches aimed at improving patient outcomes. Method 2.1 Data Sources We obtained plasma pQTL data from the deCODE Health study, which performed comprehensive proteomic profiling in plasma samples from 35,559 Icelandic participants using the SomaScan platform, ultimately quantifying 4907 distinct plasma proteins . For brain‐derived protein data, we used pQTL information on 1097 proteins measured in the dorsolateral prefrontal cortex from participants in the ROS/MAP using mass spectrometry . We also incorporated GWAS summary statistics for three PD progression phenotypes, including composite (2755 patients), motor (2848 patients), and cognitive (2788 patients), as reported by Tan MMX et al. For PD onset, the discovery cohort consisted of GWAS summary statistics derived from Nalls MA et al. (15,056 cases and 12,637 controls), and the replication cohort employed data from the FinnGen consortium (4235 cases and 373,042 controls). Details of these datasets are provided in Table . 2.2 PWAS We conducted PWAS on both brain and whole blood tissues to identify protein‐level associations with PD phenotypes. For brain tissue, we utilized the Functional Summary‐based Imputation (FUSION) framework, which employs existing pQTL weights specifically tailored to brain proteomes . 
FUSION is a well‐established computational tool that imputes genetically regulated gene expression and assesses gene‐level associations with complex traits and diseases. By leveraging pretrained pQTL weights for brain tissue, we integrated PD‐related phenotypes and performed PWAS using FUSION on a Linux platform . In contrast, appropriate pretrained PWAS weights for whole blood were unavailable. To overcome this limitation, we employed the Omnibus Transcriptome Test using Expression Reference Summary data (OTTERS), a specialized framework designed to generate and utilize pQTL weights from summary‐level data . OTTERS operates in two primary stages. In Stage I, we constructed genetically regulated expression (GReX) imputation models by deriving cis‐pQTL weights, defined as the regions extending 1 MB upstream and downstream of the protein‐coding genes, from summary‐level cis‐pQTL data and external European linkage disequilibrium (LD) reference panels from the 1000 Genomes Project. Multiple methodologies were employed for weight derivation, including P+T ( p ‐value thresholding with LD clumping) , lassosum (a frequentist LASSO‐based approach) , SDPR (a nonparametric Bayesian Dirichlet Process Regression model) , and PRS‐CS (a Bayesian multivariable regression model utilizing continuous shrinkage priors) . In Stage II, these cis‐pQTL weights were used to estimate GReX for each gene, enabling gene‐level association tests within the GWAS dataset. PWAS p ‐values derived from each modeling approach were subsequently integrated into a single composite metric using the Aggregated Cauchy Association Test (ACAT‐O) . We refer to the resultant p ‐values from this integrated test as OTTERS p ‐values. For our analyses, we incorporated plasma pQTL data from the deCODE Health Study and brain pQTL data from the Religious Orders Study and the Rush Memory and Aging Project (ROS/MAP). We applied the Benjamini–Hochberg (BH) method to correct p ‐values and control the false discovery rate (FDR), thereby minimizing false positives without excessively inflating false negatives. In the PWAS, proteins with FDR‐corrected p ‐values below 0.05 were considered significantly associated with the corresponding PD phenotype. Specifically, for proteins associated with PD onset, those that reached significance in the discovery cohort and maintained p < 0.05 in the replication cohort were deemed successfully replicated and selected for subsequent analyses. 2.3 Sensitivity Analyses 2.3.1 SMR Analysis To rigorously validate our PWAS findings, we employed SMR to confirm both brain and plasma proteins found to be causally associated with PD‐related phenotypes. SMR integrates pQTL and GWAS summary statistics within the MR framework, which utilizes instrumental variables (IVs), genetic variants that serve as proxies for protein levels, to enable the assessment of the causal impact of protein levels on PD‐related traits . SMR is an extension of MR, and MR adheres to three core assumptions: (i) the relevance assumption, which requires a strong association between IVs and the exposure; (ii) the independence assumption, stating that IVs influence the outcome solely through the exposure; (iii) the exclusion restriction assumption, which dictates that IVs should not have a direct impact on the outcome. Unlike conventional two‐sample MR, where two independent GWAS datasets are required to estimate the causal effect between traits, SMR combines pQTL and GWAS data and utilizes the Heterogeneity in Dependent Instruments (HEIDI) test . 
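To make the aggregation and multiple-testing steps of Section 2.2 concrete before moving on, the snippet below is a minimal sketch (not the authors' code) of how per-method OTTERS p-values for each protein can be combined with the Cauchy combination test (ACAT-O) and then adjusted with the Benjamini-Hochberg procedure; the protein names and p-values are hypothetical.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def acat(pvals):
    """Cauchy combination test (ACAT) with equal weights."""
    p = np.asarray(pvals, dtype=float)
    w = np.full(p.size, 1.0 / p.size)
    # Map each p-value to a standard Cauchy quantile and take the weighted sum.
    t = np.sum(w * np.tan((0.5 - p) * np.pi))
    # Convert the combined statistic back to a p-value via the Cauchy distribution.
    return 0.5 - np.arctan(t / np.sum(w)) / np.pi

# Hypothetical PWAS p-values from the four imputation models (P+T, lassosum, SDPR, PRS-CS).
per_method_p = {
    "PROT_A": [1e-6, 5e-5, 3e-6, 2e-7],
    "PROT_B": [0.20, 0.35, 0.12, 0.41],
    "PROT_C": [0.004, 0.010, 0.002, 0.008],
}

proteins = list(per_method_p)
otters_p = np.array([acat(per_method_p[name]) for name in proteins])

# Benjamini-Hochberg correction across proteins; FDR < 0.05 defines PWAS significance.
significant, p_fdr, _, _ = multipletests(otters_p, alpha=0.05, method="fdr_bh")
for name, p_raw, p_adj, flag in zip(proteins, otters_p, p_fdr, significant):
    print(f"{name}: OTTERS p = {p_raw:.2e}, FDR-adjusted p = {p_adj:.2e}, significant = {flag}")
```

For PD onset, the FDR-significant proteins from this step would additionally be required to reach p < 0.05 in the replication cohort before entering the sensitivity analyses described below.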
The combined SMR and HEIDI approach offers more robust discrimination between pleiotropic and linkage effects, reduces potential biases due to LD, and lowers the large sample size requirements often seen in standard MR methods . We adopted the SMR‐derived estimates as our primary measures of each protein's influence on PD‐related phenotypes. Given the inherent stringency of the SMR method, we applied the Benjamini–Hochberg procedure to control the FDR, thereby minimizing false positives without excessively inflating false negatives. Any protein that met the criteria of an FDR‐adjusted SMR p < 0.05 and a HEIDI p > 0.01 was considered to have a causal relationship with the respective PD‐related phenotype . The threshold for the p‐value of the IVs was 5e‐08 when running SMR. To ensure the robustness of our IVs, we calculated F‐statistics using the established formula F = r²(N − 2) / (1 − r²), where N is the sample size and r² is the proportion of variance in the exposure explained by the IV. An F‐statistic greater than 10 is commonly regarded as indicative of sufficient IV strength, thus mitigating weak instrument bias. All calculated F‐values are presented in Table . 2.3.2 Colocalization Analysis To determine whether the observed associations between proteins and PD‐related phenotypes stemmed from a shared causal variant rather than LD, we conducted Bayesian colocalization analyses using the coloc R package . This methodology integrated both brain and plasma pQTL data with GWAS summary statistics for PD‐related traits. We evaluated five distinct hypotheses: (i) H0: no causal variant influences either the protein or PD‐related phenotypes; (ii) H1: a causal variant affects only the protein; (iii) H2: a causal variant affects only the PD phenotype; (iv) H3: distinct causal variants influence the protein and PD phenotypes independently; and (v) H4: a single causal variant affects both. For each protein, we included single nucleotide polymorphisms (SNPs) within a ± 500 kb window surrounding its pQTL region. In instances where a protein was associated with multiple pQTLs, each pQTL was analyzed separately, prioritizing those with the strongest evidence of association. A posterior probability (PP) greater than 0.8 for hypothesis H4 was considered strong evidence supporting the existence of a shared causal variant. Overall, proteins that were significant in the PWAS and passed the SMR, HEIDI, and colocalization assessments were prioritized as candidate targets for PD treatment. 2.3.3 Reverse MR Analysis Complementing our primary SMR and colocalization analyses, we implemented a reverse MR approach to investigate potential bidirectional causal relationships among candidate targets . In this analysis, GWAS data for PD‐related phenotypes were designated as exposures, while proteins that satisfied the PWAS, SMR, HEIDI, and colocalization thresholds from both whole‐blood and brain pQTL datasets were treated as outcomes. To ensure an adequate number of IVs for each PD‐related phenotype, we adopted a relaxed significance threshold of p < 5e‐06 and performed LD clumping to maintain LD independence (r² < 0.001, window size = 10,000 kb) among the SNPs. Subsequently, we calculated F‐statistics for each IV to assess their strength, excluding those with F‐values < 10 to mitigate weak instrument bias.
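The same instrument-strength screen is applied to the SMR instruments above and to the reverse-MR instruments described next; as a worked illustration, the helper below evaluates the F-statistic for a hypothetical cis-pQTL explaining 1% of protein variance in a cohort of roughly 35,000 samples.

```python
def iv_f_statistic(r2: float, n: int) -> float:
    """Single-instrument F-statistic: F = r^2 * (N - 2) / (1 - r^2)."""
    return r2 * (n - 2) / (1.0 - r2)

# Hypothetical example: variance explained r^2 = 0.01, sample size N = 35,559.
f = iv_f_statistic(r2=0.01, n=35_559)
print(f"F = {f:.1f}")  # ~359, far above the conventional threshold of 10
print("retain instrument" if f > 10 else "exclude as weak instrument")
```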
Following this instrument‐strength filtering, the Steiger test was conducted to verify the directionality of the associations, retaining only SNPs that explained a larger proportion of variance in the exposure compared to the outcome (Table ). This step ensured that each IV primarily influenced the outcome through its effect on the exposure. Finally, we applied a Bonferroni‐corrected significance threshold of p < 0.05/n, where n denotes the total number of tested associations, with associations surpassing this threshold considered statistically significant, thereby revealing potential bidirectional causalities between PD and the candidates. 2.4 PheW‐MR Analysis of 679 Disease Traits To evaluate the potential unintended consequences of targeting proteins implicated in PD‐related phenotypes, we conducted a PheW‐MR analysis encompassing 679 distinct disease traits. Initially, we established causal associations between our prioritized proteins and these 679 common disease traits using PheW‐MR. Subsequently, we integrated these results with the SMR findings for PD‐related phenotypes to ensure that potential side effects were not confounded by directional biases. This comprehensive approach enabled the identification of unintended consequences associated with targeting specific proteins as therapeutic interventions. For this analysis, protein–disease associations were derived from PheW‐MR evaluations across a broad spectrum of 679 diseases, each comprising more than 500 cases, as previously described by Zhou et al. These phenotypes were sourced from the UK Biobank (N ≤ 408,961) and categorized using PheCodes. To determine the effect sizes of proteins on the 679 diseases, we performed MR. In this process, IVs were selected using a stringent significance threshold of p < 5e‐08, followed by LD clumping (r² < 0.1, window size = 10,000 kb) to ensure LD independence among the selected SNPs. The effects of proteins on PD‐related phenotypes were obtained from SMR analyses linking the candidate proteins to PD‐related traits. We defined a side effect as any influence on an alternate disease trait resulting from manipulating a target protein to achieve a 20% reduction in the risk of the PD‐related phenotype. To estimate and standardize the magnitude of side effects, we adopted the following formula for the odds ratio of the effect: OR_effect = exp(β_effect), where β_effect = (β_679 diseases / β_PD phenotypes) × ln(0.8). Here, β_679 diseases represents the effect of the candidate proteins on the 679 diseases (obtained from PheW‐MR), with only associations having p‐values below 0.05 included, and β_PD phenotypes denotes the proteins' effect on the PD‐related phenotypes (obtained from SMR). Proteins with OR values greater than 1 were considered to have potentially adverse side effects, whereas those with OR values less than 1 were deemed to confer beneficial side effects. p‐values for the side effects were estimated using a bootstrap method with 1 million iterations (n = 1,000,000) and corrected using the Bonferroni method: a side effect was considered statistically significant if its p‐value was below 0.05/k, where k represents the total number of protein–disease associations with p < 0.05 in the PheW‐MR analysis .
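The side-effect standardization in Section 2.4 reduces to a simple rescaling of MR effect sizes; the sketch below, with hypothetical effect estimates, converts a PheW-MR estimate for another disease into the odds ratio expected when the target protein is shifted enough to cut the PD-phenotype risk by 20%.

```python
import math

def standardized_side_effect_or(beta_disease: float, beta_pd: float,
                                risk_reduction: float = 0.20) -> float:
    """OR for another disease trait under a fixed relative reduction in PD-phenotype risk."""
    # beta_disease: protein effect on the other disease (from PheW-MR).
    # beta_pd: protein effect on the PD-related phenotype (from SMR).
    beta_effect = (beta_disease / beta_pd) * math.log(1.0 - risk_reduction)  # ln(0.8) for a 20% reduction
    return math.exp(beta_effect)

# Hypothetical example: the protein raises PD risk (beta_pd = 0.40) and the other disease's risk (beta_disease = 0.15).
or_effect = standardized_side_effect_or(beta_disease=0.15, beta_pd=0.40)
print(f"OR_effect = {or_effect:.3f}")  # about 0.92, i.e. a beneficial side effect (OR < 1)
```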
2.5 Cellular Distribution‐Based Clustering of Candidate Targets Using ABA Data Given that PD predominantly affects the brain, it is imperative to elucidate the cellular distribution of genes encoding candidate target proteins across various brain regions to develop effective therapies. To refine the spatial distribution of these targets and identify co‐expression patterns within specific cell populations, we conducted cluster analysis using gene expression data from the ABA . Specifically, we utilized the Whole Human Brain 10x RNA‐seq dataset (data updated on March 30, 2024) and extracted log₂‐normalized expression matrices corresponding to our prioritized protein‐coding genes. Cluster analysis was performed based on the similarity of gene expression levels across different cell types. Hierarchical clustering was executed using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) to identify patterns of co‐expression and potential functional relationships among the genes. 2.6 PPI Network and Druggability Assessment To identify synergistic interactions among targets across multiple phenotypes and facilitate the development of multi‐target therapeutics, we constructed a PPI network encompassing candidate targets associated with various PD phenotypes. We utilized the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database (version 12.0; http://string‐db.org ) to identify interactions among proteins implicated in both the onset and progression of PD, as determined in our preceding analyses. An interaction score threshold of ≥ 0.4 was applied to ensure a moderate level of confidence in the identified interactions. The resulting PPI network was subsequently visualized using Cytoscape (version 3.6.1; https://cytoscape.org ). For clarity, any nodes not connected to the main PPI network were excluded from the final visualization. To evaluate the feasibility of drug repurposing, we conducted a druggability assessment using real‐world drug target data from databases such as DrugBank. This assessment enabled us to identify overlaps between our identified proteins and established drug targets, as well as to explore their associated therapeutic indications. By leveraging preprocessed data from Ruiz et al. , we facilitated the evaluation of potential repurposing opportunities, thereby enhancing the clinical relevance of our candidate proteins for the treatment of Parkinson's disease.
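For the clustering step of Section 2.5, the following is a minimal sketch of UPGMA (average-linkage) hierarchical clustering on a genes-by-cell-types log2 expression matrix with SciPy; the random matrix, gene labels, and the cut into three clusters are placeholders rather than the actual ABA processing.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Placeholder log2-normalized expression: 24 candidate genes x 31 brain cell types.
genes = [f"GENE_{i:02d}" for i in range(1, 25)]
expr = rng.normal(loc=2.0, scale=1.0, size=(24, 31))

# UPGMA corresponds to average linkage on pairwise distances between gene expression profiles.
Z = linkage(expr, method="average", metric="euclidean")

# Cut the dendrogram into three groups, mirroring the three co-expression clusters reported in the results.
labels = fcluster(Z, t=3, criterion="maxclust")
for gene, cluster_id in zip(genes, labels):
    print(gene, "-> cluster", cluster_id)
```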
Results 3.1 PWAS for PD Progression and Onset 3.1.1 Identification of Plasma Proteins Associated With PD Progression We integrated plasma pQTL data from the deCODE study with GWAS summary statistics for the three PD progression phenotypes (cognitive, motor, and composite) and conducted a PWAS using the OTTERS framework, encompassing 4732 proteins. Proteins were deemed significantly associated with PD progression if they met the FDR‐corrected significance threshold (p < 0.05). Our analysis identified 42 plasma proteins associated with cognitive progression, 30 with motor progression, and 39 with composite progression (see Table and Figure ). Notably, APOE exhibited the most significant association with cognitive progression (p = 3.12e‐14), while NSF was most significantly associated with both motor (p = 3.90e‐21) and composite (p = 5.42e‐10) progressions (Table ). To validate the causal associations between plasma proteins and the PD progression phenotypes, we performed SMR and HEIDI analyses. Applying stringent criteria (FDR‐corrected SMR p < 0.05 and HEIDI p > 0.05), we identified 12 plasma proteins causally associated with cognitive progression, 5 with motor progression, and 6 with composite progression (Table , Figure ). A subsequent colocalization analysis (PP H4 > 0.8) confirmed that, of the 12 proteins linked to cognitive progression, 10 (ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI) shared causal variants with PD cognitive progression loci (Table , Figure ). Among these 10 proteins, the abundance of ALKBH3 (β = 0.482, p = 2.58e‐03), GLO1 (β = 1.094, p = 3.64e‐03), IDO1 (β = 1.899, p = 7.34e‐03), SERPINA3 (β = 0.381, p = 9.87e‐03), SORD (β = 0.521, p = 2.17e‐02), and TPST1 (β = 0.342, p = 1.10e‐02) exhibited significant positive causal correlations with cognitive progression, whereas GM2A (β = −0.414, p = 6.48e‐03), MICB (β = −0.105, p = 4.79e‐02), SH3BGRL3 (β = −0.402, p = 1.36e‐02), and TGFBI (β = −0.247, p = 1.76e‐03) demonstrated negative correlations. Of the five proteins associated with motor progression, four (NUDT2, PLA2G12B, EVA1C, and MATN3) passed the colocalization test. Specifically, the abundance of NUDT2 (β = 0.378, p = 1.11e‐02) and PLA2G12B (β = 0.968, p = 2.90e‐02) correlated positively with motor progression, whereas EVA1C (β = −1.446, p = 9.73e‐03) and MATN3 (β = −0.160, p = 1.69e‐02) correlated negatively. Additionally, among the six proteins implicated in composite progression, three (SH3BGRL3, NANS, and RSPO3) were validated by colocalization analysis. SH3BGRL3 (β = 0.392, p = 3.84e‐02) was positively correlated with composite progression, while NANS (β = −1.153, p = 1.78e‐02) and RSPO3 (β = −0.503, p = 6.19e‐03) exhibited negative correlations.
Notably, SH3BGRL3 emerged as a causal factor in both cognitive and composite progressions, despite manifesting a negative correlation with the former and a positive correlation with the latter. Finally, reverse MR analyses revealed no significant bidirectional associations ( p < 0.05/114) between these proteins and the three PD progression phenotypes (Table ). 3.1.2 Identified Related Plasma Proteins for the Onset of PD Using the OTTERS framework, we conducted a PWAS on 4693 proteins in both the discovery and replication datasets to identify plasma proteins linked to PD onset. Proteins meeting the FDR‐corrected significance threshold ( p < 0.05) were considered significantly associated with PD onset. In the discovery dataset, we identified 317 proteins correlated with PD onset, while 230 proteins were implicated in the replication dataset. Notably, 54 proteins reached significance in both datasets, with NSF exhibiting the most robust association ( p discovery = 2.06e‐56, p replication = 1.87e‐224) (Table , Figure ). To further evaluate their potential causal roles in PD onset, the 54 proteins were subjected to subsequent sensitivity analyses. We applied SMR and the HEIDI test to these 54 proteins to explore their causal associations with PD onset. Overall, 22 plasma proteins demonstrated significant causal evidence for PD onset ( P SMR (FDR‐corrected ) < 0.05 and P HEIDI > 0.01) (Table ). Colocalization analysis confirmed that nine of these proteins (ARSA, EHBP1, FCGR2A, GGH, GPNMB, HDHD2, DNAJB4, HAVCR2, and PDCD1LG2) shared a common causal variant with PD onset (PP H4 > 0.8; Table , Figure ). Among these nine proteins, the abundance of six proteins, ARSA (OR = 2.177, p = 7.53e‐04), EHBP1 (OR = 3.739, p = 1.27e‐02), FCGR2A (OR = 1.059, p = 4.37e‐05), GGH (OR = 1.167, p = 2.81e‐02), GPNMB (OR = 1.503, p = 1.79e‐07), and HDHD2 (OR = 1.230, p = 1.64e‐02), was significantly associated with an elevated risk of PD onset, whereas the abundance of DNAJB4 (OR = 0.701, p = 2.54e‐03), HAVCR2 (OR = 0.905, p = 5.55e‐03), and PDCD1LG2 (OR = 0.852, p = 2.44e‐02) was associated with a reduced risk. To examine potential bidirectional causal relationships, we also performed reverse MR analyses with no significant association detected, reinforcing the robustness of the observed causal links between the identified proteins and PD onset (Table ). 3.1.3 Identified Related Proteins With Human Brain Proteomes for the Progression of PD We utilized the FUSION framework to perform PWAS analyses on brain pQTLs, evaluating the associations between 1097 proteins and the progression of PD. Our analysis identified 57 proteins associated with cognitive progression, 45 with motor progression, and 55 with composite progression ( p < 0.05). Among these, MICAL1 exhibited the strongest associations with both cognitive progression ( p = 1.38e‐03) and composite progression ( p = 1.36e‐03), while C14orf159 was most significantly associated with motor progression ( p = 2.54e‐03). Despite these findings, no proteins reached the FDR‐corrected significance threshold ( p < 0.05) for any of the PD progression phenotypes. Consequently, no brain proteins were identified as candidate targets for further sensitivity analyses (Figures and Table ). 3.1.4 Identified Related Proteins With Human Brain Proteomes for the PD Onset We employed the FUSION framework for PWAS analysis to leverage brain pQTL data in assessing the association between 1067 proteins and the onset of PD (Figure , Table ). 
In the discovery cohort, 99 proteins demonstrated significant associations with PD onset ( p < 0.05). After applying the FDR correction, four proteins remained significantly associated and were subsequently validated in replication cohorts ( p < 0.05). These proteins include CD38 ( p discovery = 8.27e‐09, p replication = 0.004), GPNMB ( p discovery = 1.21e‐08, p replication = 0.034), VKORC1 ( p discovery = 1.65e‐05, p replication = 0.015), and GAK ( p discovery = 3.69e‐07, p replication = 0.003). Additionally, CTSB ( p discovery = 9.47 × 10 −5 , p replication = 0.477) and ARSA ( p discovery = 7.95 × 10 −5 , p replication = 0.153) were found to be significantly associated with PD onset in the discovery cohort but did not reach the p < 0.05 threshold in the replication cohort. For the four proteins validated through PWAS, we initially performed SMR and the HEIDI test to elucidate their causal relationships with PD onset (Table ). Among these four proteins, only GPNMB and CD38 had valid IVs extracted from brain pQTL data. Consequently, we conducted SMR and HEIDI tests exclusively for these two proteins. The results revealed that both GPNMB and CD38 exhibited significant causal associations with PD onset. Specifically, the abundance of GPNMB (OR = 1.394, p = 7.73e‐07) was associated with an increased risk of PD onset, whereas the abundance of CD38 (OR = 0.415, p = 3.32e‐08) was associated with a decreased risk. Subsequent colocalization analysis using the COLOC method confirmed the associations between these two proteins and PD onset (Table ). The analysis showed that only GPNMB had PP.H4 exceeding 0.8, indicating a shared causal variant between GPNMB and PD onset. As a final sensitivity analysis, we attempted to perform a reverse MR analysis to investigate the association between GPNMB and PD onset. Unfortunately, because we only had SNPs within the GPNMB cis region, and there was no overlap with the IVs for PD onset, we did not have valid IVs for the analysis, thereby precluding the reverse MR analysis for this protein. This limitation prevents us from fully establishing the bidirectional causal relationship of this protein, necessitating further investigation in future studies. 3.1.5 Summary of Candidate Plasma and Brain Targets Related to PD Phenotypes Our analyses identified 25 candidate targets associated with PD‐related phenotypes. Among these, 16 plasma proteins were linked to PD progression. Specifically, 10 plasma proteins (ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI) exhibited causal relationships with cognitive progression, four proteins (NUDT2, PLA2G12B, EVA1C, and MATN3) were associated with motor progression, and three proteins (SH3BGRL3, NANS, and RSPO3) were linked to composite progression. Notably, SH3BGRL3 emerged as a causal factor for both cognitive and composite progressions. Additionally, nine plasma proteins (ARSA, EHBP1, FCGR2A, GGH, GPNMB, HDHD2, DNAJB4, HAVCR2, and PDCD1LG2) demonstrated causal relationships with PD onset. When applying the same analytical pipeline to brain proteins, we did not identify any brain‐specific candidate targets causally linked to PD progression. However, we identified one protein in brain tissue, GPNMB, as a candidate target showing a clear causal association with PD onset. Intriguingly, GPNMB was implicated in PD onset in both plasma and brain tissues. These results are summarized in Figure and Table . 
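To make the evidence cascade behind this candidate list explicit, the sketch below intersects the four filters used above (PWAS FDR, SMR FDR, HEIDI, and colocalization PP.H4) for a handful of proteins; the protein names, values, and column names are entirely hypothetical and illustrative only.

```python
import pandas as pd

# Hypothetical per-protein evidence table for a single PD phenotype.
evidence = pd.DataFrame({
    "protein":   ["PROT_A", "PROT_B", "PROT_C", "PROT_D"],
    "pwas_fdr":  [1e-6,      2e-4,     1e-50,    0.03],
    "smr_fdr":   [3e-5,      1e-3,     0.20,     0.04],
    "heidi_p":   [0.45,      0.30,     0.002,    0.60],
    "coloc_pp4": [0.95,      0.88,     0.10,     0.55],
})

candidates = evidence[
    (evidence["pwas_fdr"] < 0.05)     # significant in the PWAS after FDR correction
    & (evidence["smr_fdr"] < 0.05)    # causal signal from SMR
    & (evidence["heidi_p"] > 0.01)    # no evidence that the signal is driven by linkage
    & (evidence["coloc_pp4"] > 0.8)   # shared causal variant supported by colocalization
]
print(candidates["protein"].tolist())  # -> ['PROT_A', 'PROT_B']
```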
3.2 PheW‐MR Following the identification of candidate targets associated with PD‐related phenotypes, we conducted a comprehensive analysis across 679 common disease traits to characterize the side effect profiles of each prioritized protein as a potential therapeutic target. Unlike the previous SMR approach, PheW‐MR results were standardized to reflect a 20% reduction in the risk of PD‐related phenotypes mediated by each protein. Consequently, the observed associations can be interpreted as potential side effects that may arise from therapeutically targeting these proteins. Under this 20% risk‐reduction assumption, PheW‐MR analyses (p < 0.05/3126) identified 1529 significant beneficial side effects (83.7%) and 297 adverse side effects (16.3%) across 25 candidate targets. A paired t‐test confirmed that beneficial side effects significantly outnumbered adverse side effects (p = 7.91e‐05), suggesting that the majority of identified side effects were beneficial (Table , Figure ). Of these 25 candidate targets, 17 exhibited exclusively beneficial side effects, while the remaining eight displayed both beneficial and adverse side effects. Among the 17 candidate targets with exclusively beneficial side effects, we focused on those that reduce the risk of each of the four PD‐related phenotypes (cognitive, motor, and composite progression, and PD onset) and demonstrated the largest number of positive outcomes. For targets mitigating PD cognitive progression, MICB exhibited the most pronounced beneficial profile, with 227 beneficial side effects primarily concentrated in the circulatory system (31 distinct traits). Regarding PD motor progression, NUDT2 was associated with 93 beneficial side effects, predominantly within the circulatory system (15 traits). For PD composite progression, SH3BGRL3 conferred 67 beneficial side effects, the highest in this category, primarily related to digestive disorders. Finally, among candidate targets for PD onset, GGH showed 72 beneficial side effects, mainly affecting musculoskeletal conditions. In contrast, the remaining eight candidate targets displayed both beneficial and adverse side effects. Notably, among targets for PD cognitive progression, GM2A was linked to 155 total side effects, comprising 56 beneficial and 99 adverse effects. For PD motor progression, EVA1C yielded 51 side effects, including 40 beneficial and 11 adverse effects. Lastly, for PD onset, FCGR2A was associated with 106 side effects, including 42 beneficial and 64 adverse effects. No adverse side effects were detected for the candidate targets associated with PD composite progression. 3.3 Cellular Distribution‐Based Clustering of Genes Corresponding to Candidate Targets To elucidate the cellular distribution of genes encoding candidate target proteins across various brain regions for the development of effective PD therapies, we retrieved gene expression matrices from the ABA covering 31 distinct brain cell types. Of the 25 proteins identified, we successfully obtained corresponding gene expression data for 24, excluding EHBP1, which lacked expression information. We then performed hierarchical clustering using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) on these 24 genes based on their expression patterns across the 31 cell types. The clustering analysis resulted in three distinct clusters. Cluster 1, comprising solely TPST1, exhibited elevated expression primarily in deep‐layer intratelencephalic and near‐projecting neurons, as well as in the mammillary body and the lower rhombic lip.
Cluster 2 included GPNMB, SORD, GM2A, PDCD1LG2, MATN3, TGFBI, FCGR2A, DNAJB4, MICB, SERPINA3, IDO1, and PLA2G12B, none of which showed particularly high expression in any of the examined cell types. Cluster 3 consisted of EVA1C, GLO1, SH3BGRL3, RSPO3, GGH, NANS, ARSA, NUDT2, HAVCR2, ALKBH3, and HDHD2, all demonstrating elevated expression in metabolic and homeostatic cell populations, notably within hippocampal regions (CA1–CA3, CA4, and the dentate gyrus), deep‐layer corticothalamic neurons, and vascular cells. Detailed results of the clustering analysis are provided in Table . 3.4 PPI Network To elucidate synergistic relationships among targets across diverse PD phenotypes, we examined interactions among the 25 candidate proteins using a PPI network constructed via the STRING database (Figure ). The PPI network analysis identified a primary interaction cluster comprising FCGR2A, HAVCR2, PDCD1LG2, and IDO1, which were interconnected with MICB. Specifically, FCGR2A, HAVCR2, and PDCD1LG2 were associated with PD onset, whereas MICB and IDO1 were linked to cognitive progression in PD. Additionally, multiple pairwise interactions were observed. GLO1 and SORD, both associated with cognitive progression, formed a direct interaction pair. GM2A, also related to cognitive progression, interacted with ARSA, a candidate target for PD onset. Furthermore, our PPI analysis revealed an interaction between TGFBI and GPNMB, which were implicated in PD cognitive progression and PD onset, respectively. 3.5 Druggability Assessment To explore the potential for repurposing existing medications targeting the candidate proteins, we consulted the DrugBank database to identify drugs known to modulate these targets. Of the 25 proteins identified in this study, 15 correspond to established drug targets, indicating significant overlaps with treatments for various neurological and psychiatric disorders (Table ). Notably, EHBP1, SERPINA3, FCGR2A, GPNMB, MICB, RSPO3, NUDT2, and GLO1 are primarily associated with antipsychotic agents such as chlorpromazine, risperidone, and olanzapine. Additionally, IDO1 and GLO1 are targets for a range of antidepressants, including citalopram, fluoxetine, and venlafaxine. Furthermore, FCGR2A, NANS, and MATN3 are linked to corticosteroids and nonsteroidal anti‐inflammatory drugs (NSAIDs) like prednisone, ibuprofen, and naproxen. GGH and MATN3 are involved in pathways targeted by antiepileptic drugs, such as phenytoin and topiramate. GM2A is associated with both antiepileptic and sedative medications, while ARSA is implicated in treatments for dystonia and epilepsy.
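As a closing illustration of the druggability assessment, the lookup amounts to a join between the candidate gene list and a preprocessed drug-target table; the toy table below encodes a few of the pairings reported in this section, and its column names and structure are assumptions rather than the actual DrugBank export.

```python
import pandas as pd

candidate_genes = ["GPNMB", "IDO1", "GLO1", "ALKBH3"]  # a subset of the 25 candidates

# Toy drug-target table modeled on the associations described above
# (in practice, derived from DrugBank via the preprocessed data of Ruiz et al.).
drug_targets = pd.DataFrame({
    "gene": ["IDO1", "GLO1", "SERPINA3", "MATN3"],
    "drug": ["citalopram", "fluoxetine", "chlorpromazine", "ibuprofen"],
})

repurposing_hits = drug_targets[drug_targets["gene"].isin(candidate_genes)]
print(repurposing_hits.groupby("gene")["drug"].apply(list).to_dict())
# -> {'GLO1': ['fluoxetine'], 'IDO1': ['citalopram']}
```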
To validate the causal associations between plasma proteins and the PD progression phenotypes, we performed SMR and HEIDI analyses. Applying stringent criteria—SMR p (FDR‐corrected) < 0.05 and HEIDI p > 0.05—we identified 12 plasma proteins causally associated with cognitive progression, 5 with motor progression, and 6 with composite progression (Table , Figure ). A subsequent colocalization analysis (PP H4 > 0.8) confirmed that, of the 12 proteins linked to cognitive progression, 10 (ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI) shared causal variants with PD cognitive progression loci (Table , Figure ). Among these 10 proteins, the abundance of ALKBH3 ( β = 0.482, p = 2.58e‐03), GLO1 ( β = 1.094, p = 3.64e‐03), IDO1 ( β = 1.899, p = 7.34e‐03), SERPINA3 ( β = 0.381, p = 9.87e‐03), SORD ( β = 0.521, p = 2.17e‐02), and TPST1 ( β = 0.342, p = 1.10e‐02) exhibited significant positive causal correlations with cognitive progression, whereas GM2A ( β = −0.414, p = 6.48e‐03), MICB ( β = −0.105, p = 4.79e‐02), SH3BGRL3 ( β = −0.402, p = 1.36e‐02), and TGFBI ( β = −0.247, p = 1.76e‐03) demonstrated negative correlations. Of the five proteins associated with motor progression, four (NUDT2, PLA2G12B, EVA1C, and MATN3) passed the colocalization test. Specifically, the abundance of NUDT2 ( β = 0.378, p = 1.11e‐02) and PLA2G12B ( β = 0.968, p = 2.90e‐02) correlated positively with motor progression, whereas EVA1C ( β = −1.446, p = 9.73e‐03) and MATN3 ( β = −0.160, p = 1.69e‐02) correlated negatively. Additionally, among the six proteins implicated in composite progression, three (SH3BGRL3, NANS, and RSPO3) were validated by colocalization analysis. SH3BGRL3 ( β = 0.392, p = 3.84e‐02) was positively correlated with composite progression, while NANS ( β = −1.153, p = 1.78e‐02) and RSPO3 ( β = −0.503, p = 6.19e‐03) exhibited negative correlations. Notably, SH3BGRL3 emerged as a causal factor in both cognitive and composite progressions, despite manifesting a negative correlation with the former and a positive correlation with the latter. Finally, reverse MR analyses revealed no significant bidirectional associations ( p < 0.05/114) between these proteins and the three PD progression phenotypes (Table ). 3.1.2 Identified Related Plasma Proteins for the Onset of PD Using the OTTERS framework, we conducted a PWAS on 4693 proteins in both the discovery and replication datasets to identify plasma proteins linked to PD onset. Proteins meeting the FDR‐corrected significance threshold ( p < 0.05) were considered significantly associated with PD onset. In the discovery dataset, we identified 317 proteins correlated with PD onset, while 230 proteins were implicated in the replication dataset. Notably, 54 proteins reached significance in both datasets, with NSF exhibiting the most robust association ( p discovery = 2.06e‐56, p replication = 1.87e‐224) (Table , Figure ). To further evaluate their potential causal roles in PD onset, the 54 proteins were subjected to subsequent sensitivity analyses. We applied SMR and the HEIDI test to these 54 proteins to explore their causal associations with PD onset. Overall, 22 plasma proteins demonstrated significant causal evidence for PD onset ( P SMR (FDR‐corrected ) < 0.05 and P HEIDI > 0.01) (Table ). Colocalization analysis confirmed that nine of these proteins (ARSA, EHBP1, FCGR2A, GGH, GPNMB, HDHD2, DNAJB4, HAVCR2, and PDCD1LG2) shared a common causal variant with PD onset (PP H4 > 0.8; Table , Figure ). 
Among these nine proteins, the abundance of six proteins, ARSA (OR = 2.177, p = 7.53e‐04), EHBP1 (OR = 3.739, p = 1.27e‐02), FCGR2A (OR = 1.059, p = 4.37e‐05), GGH (OR = 1.167, p = 2.81e‐02), GPNMB (OR = 1.503, p = 1.79e‐07), and HDHD2 (OR = 1.230, p = 1.64e‐02), was significantly associated with an elevated risk of PD onset, whereas the abundance of DNAJB4 (OR = 0.701, p = 2.54e‐03), HAVCR2 (OR = 0.905, p = 5.55e‐03), and PDCD1LG2 (OR = 0.852, p = 2.44e‐02) was associated with a reduced risk. To examine potential bidirectional causal relationships, we also performed reverse MR analyses with no significant association detected, reinforcing the robustness of the observed causal links between the identified proteins and PD onset (Table ). 3.1.3 Identified Related Proteins With Human Brain Proteomes for the Progression of PD We utilized the FUSION framework to perform PWAS analyses on brain pQTLs, evaluating the associations between 1097 proteins and the progression of PD. Our analysis identified 57 proteins associated with cognitive progression, 45 with motor progression, and 55 with composite progression ( p < 0.05). Among these, MICAL1 exhibited the strongest associations with both cognitive progression ( p = 1.38e‐03) and composite progression ( p = 1.36e‐03), while C14orf159 was most significantly associated with motor progression ( p = 2.54e‐03). Despite these findings, no proteins reached the FDR‐corrected significance threshold ( p < 0.05) for any of the PD progression phenotypes. Consequently, no brain proteins were identified as candidate targets for further sensitivity analyses (Figures and Table ). 3.1.4 Identified Related Proteins With Human Brain Proteomes for the PD Onset We employed the FUSION framework for PWAS analysis to leverage brain pQTL data in assessing the association between 1067 proteins and the onset of PD (Figure , Table ). In the discovery cohort, 99 proteins demonstrated significant associations with PD onset ( p < 0.05). After applying the FDR correction, four proteins remained significantly associated and were subsequently validated in replication cohorts ( p < 0.05). These proteins include CD38 ( p discovery = 8.27e‐09, p replication = 0.004), GPNMB ( p discovery = 1.21e‐08, p replication = 0.034), VKORC1 ( p discovery = 1.65e‐05, p replication = 0.015), and GAK ( p discovery = 3.69e‐07, p replication = 0.003). Additionally, CTSB ( p discovery = 9.47 × 10 −5 , p replication = 0.477) and ARSA ( p discovery = 7.95 × 10 −5 , p replication = 0.153) were found to be significantly associated with PD onset in the discovery cohort but did not reach the p < 0.05 threshold in the replication cohort. For the four proteins validated through PWAS, we initially performed SMR and the HEIDI test to elucidate their causal relationships with PD onset (Table ). Among these four proteins, only GPNMB and CD38 had valid IVs extracted from brain pQTL data. Consequently, we conducted SMR and HEIDI tests exclusively for these two proteins. The results revealed that both GPNMB and CD38 exhibited significant causal associations with PD onset. Specifically, the abundance of GPNMB (OR = 1.394, p = 7.73e‐07) was associated with an increased risk of PD onset, whereas the abundance of CD38 (OR = 0.415, p = 3.32e‐08) was associated with a decreased risk. Subsequent colocalization analysis using the COLOC method confirmed the associations between these two proteins and PD onset (Table ). 
The analysis showed that only GPNMB had PP.H4 exceeding 0.8, indicating a shared causal variant between GPNMB and PD onset. As a final sensitivity analysis, we attempted to perform a reverse MR analysis to investigate the association between GPNMB and PD onset. Unfortunately, because we only had SNPs within the GPNMB cis region, and there was no overlap with the IVs for PD onset, we did not have valid IVs for the analysis, thereby precluding the reverse MR analysis for this protein. This limitation prevents us from fully establishing the bidirectional causal relationship of this protein, necessitating further investigation in future studies. 3.1.5 Summary of Candidate Plasma and Brain Targets Related to PD Phenotypes Our analyses identified 25 candidate targets associated with PD‐related phenotypes. Among these, 16 plasma proteins were linked to PD progression. Specifically, 10 plasma proteins (ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI) exhibited causal relationships with cognitive progression, four proteins (NUDT2, PLA2G12B, EVA1C, and MATN3) were associated with motor progression, and three proteins (SH3BGRL3, NANS, and RSPO3) were linked to composite progression. Notably, SH3BGRL3 emerged as a causal factor for both cognitive and composite progressions. Additionally, nine plasma proteins (ARSA, EHBP1, FCGR2A, GGH, GPNMB, HDHD2, DNAJB4, HAVCR2, and PDCD1LG2) demonstrated causal relationships with PD onset. When applying the same analytical pipeline to brain proteins, we did not identify any brain‐specific candidate targets causally linked to PD progression. However, we identified one protein in brain tissue, GPNMB, as a candidate target showing a clear causal association with PD onset. Intriguingly, GPNMB was implicated in PD onset in both plasma and brain tissues. These results are summarized in Figure and Table . Identification of Plasma Proteins Associated With PD Progression We integrated plasma pQTL data from the deCODE study with GWAS summary statistics for three PD statuses (cognitive, motor, and composite progression) and conducted a PWAS using the OTTERS framework, encompassing 4732 proteins. Proteins were deemed significantly associated with PD progression if they met the FDR‐corrected significance threshold ( p < 0.05). Our analysis identified 42 plasma proteins associated with cognitive progression, 30 with motor progression, and 39 with composite progression (see Table and Figure ). Notably, APOE exhibited the most significant association with cognitive progression ( p = 3.12e‐14), while NSF was most significantly associated with both motor ( p = 3.90e‐21) and composite ( p = 5.42e‐10) progressions (Table ). To validate the causal associations between plasma proteins and the PD progression phenotypes, we performed SMR and HEIDI analyses. Applying stringent criteria—SMR p (FDR‐corrected) < 0.05 and HEIDI p > 0.05—we identified 12 plasma proteins causally associated with cognitive progression, 5 with motor progression, and 6 with composite progression (Table , Figure ). A subsequent colocalization analysis (PP H4 > 0.8) confirmed that, of the 12 proteins linked to cognitive progression, 10 (ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI) shared causal variants with PD cognitive progression loci (Table , Figure ). 
PheW‐MR
Following the identification of candidate targets associated with PD‐related phenotypes, we conducted a comprehensive analysis across 679 common disease traits to characterize the side effect profiles of each prioritized protein as a potential therapeutic target. Unlike the previous SMR approach, PheW‐MR results were standardized to reflect a 20% reduction in the risk of PD‐related phenotypes mediated by each protein. Consequently, the observed associations can be interpreted as potential side effects that may arise from therapeutically targeting these proteins. Under this 20% risk‐reduction assumption, PheW‐MR analyses (p < 0.05/3126) identified 1529 significant beneficial side effects (83.7%) and 297 adverse side effects (16.3%) across 25 candidate targets. A paired t‐test confirmed that beneficial side effects significantly outnumbered adverse side effects (p = 7.91e‐05), suggesting that the majority of identified side effects were beneficial (Table , Figure ). Of these 25 candidate targets, 17 exhibited exclusively beneficial side effects, while the remaining eight displayed both beneficial and adverse side effects. Among the 17 candidate targets with exclusively beneficial side effects, we focused on those that reduce the risk of four major PD progression phenotypes and demonstrated the largest number of positive outcomes. For targets mitigating PD cognitive progression, MICB exhibited the most pronounced beneficial profile, with 227 beneficial side effects primarily concentrated in the circulatory system (31 distinct traits). Regarding PD motor progression, NUDT2 was associated with 93 beneficial side effects, predominantly within the circulatory system (15 traits). For PD composite progression, SH3BGRL3 conferred 67 beneficial side effects (the highest in this category), primarily related to digestive disorders. Finally, among candidate targets for PD onset, GGH showed 72 beneficial side effects, mainly affecting musculoskeletal conditions. In contrast, the remaining eight candidate targets displayed both beneficial and adverse side effects.
Notably, targeting PD cognitive progression, GM2A was linked to 155 total side effects, comprising 56 beneficial and 99 adverse effects. For PD motor progression, EVA1C yielded 51 side effects, including 40 beneficial and 11 adverse effects. Lastly, for PD onset, FCGR2A was associated with 106 side effects, including 42 beneficial and 64 adverse effects. No significant side effects were detected among the candidate targets for PD composite progression. Cellular Distribution‐Based Clustering of Genes Corresponding to Candidate Targets To elucidate the cellular distribution of genes encoding candidate target proteins across various brain regions for the development of effective PD therapies, we retrieved gene expression matrices from the ABA covering 31 distinct brain cell types. Of the 25 proteins identified, we successfully obtained corresponding gene expression data for 24, excluding EHBP1, which lacked expression information. We then performed hierarchical clustering using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) on these 24 genes based on their expression patterns across the 31 cell types. The clustering analysis resulted in three distinct clusters. Cluster 1, comprising solely TPST1, exhibited elevated expression primarily in deep‐layer intratelencephalic and near‐projecting neurons, as well as in the mammillary body and the lower rhombic lip. Cluster 2 included GPNMB, SORD, GM2A, PDCD1LG2, MATN3, TGFBI, FCGR2A, DNAJB4, MICB, SERPINA3, IDO1, and PLA2G12B, none of which showed particularly high expression in any of the examined cell types. Cluster 3 consisted of EVA1C, GLO1, SH3BGRL3, RSPO3, GGH, NANS, ARSA, NUDT2, HAVCR2, ALKBH3, and HDHD2, all demonstrating elevated expression in metabolic and homeostatic cell populations, notably within hippocampal regions (CA1–CA3, CA4, and the dentate gyrus), deep‐layer corticothalamic neurons, and vascular cells. Detailed results of the clustering analysis are provided in Table . PPI Network To elucidate synergistic relationships among targets across diverse PD phenotypes, we examined interactions among the 25 candidate proteins using a PPI network constructed via the STRING database (Figure ). The PPI network analysis identified a primary interaction cluster comprising FCGR2A, HAVCR2, PDCD1LG2, and IDO1, which were interconnected with MICB. Specifically, FCGR2A, HAVCR2, and PDCD1LG2 were associated with PD onset, whereas MICB and IDO1 were linked to cognitive progression in PD. Additionally, multiple pairwise interactions were observed. GLO1 and SORD, both associated with cognitive progression, formed a direct interaction pair. GM2A, also related to cognitive progression, interacted with ARSA, a candidate target for PD onset. Furthermore, our PPI analysis revealed an interaction between TGFBI and GPNMB, both of which have been implicated in PD onset. Druggability Assessment To explore the potential for repurposing existing medications targeting the candidate proteins, we consulted the DrugBank database to identify drugs known to modulate these targets. Of the 25 proteins identified in this study, 15 correspond to established drug targets, indicating significant overlaps with treatments for various neurological and psychiatric disorders (Table ). Notably, EHBP1, SERPINA3, FCGR2A, GPNMB, MICB, RSPO3, NUDT2, and GLO1 are primarily associated with antipsychotic agents such as chlorpromazine, risperidone, and olanzapine. 
Additionally, IDO1 and GLO1 are targets for a range of antidepressants, including citalopram, fluoxetine, and venlafaxine. Furthermore, FCGR2A, NANS, and MATN3 are linked to corticosteroids and nonsteroidal anti‐inflammatory drugs (NSAIDs) like prednisone, ibuprofen, and naproxen. GGH and MATN3 are involved in pathways targeted by antiepileptic drugs, such as phenytoin and topiramate. GM2A is associated with both antiepileptic and sedative medications, while ARSA is implicated in treatments for dystonia and epilepsy.

Discussion
Our study identified and validated latent plasma and brain targets for the onset and progression of PD by an integrative PWAS, which is an effective method in such contexts. Based on extensive pQTL data from plasma and brain tissues combined with comprehensive GWAS summary statistics, we identified 25 protein targets associated with the PD trajectory, including its onset and progression. Furthermore, we provided comprehensive insights into the therapeutic potential and safety profiles of the prioritized targets through PheW‐MR analysis, cellular distribution‐based clustering, PPI networks, and druggability assessments. Moreover, we reviewed the prioritized protein targets against literature sources (including PubMed, Embase, and Google Scholar): ALKBH3, GLO1, GM2A, IDO1, SERPINA3, TGFBI, PLA2G12B, ARSA, FCGR2A, and GPNMB had previously been reported with reliable evidence, whereas MICB, SH3BGRL3, SORD, TPST1, EVA1C, MATN3, NUDT2, NANS, RSPO3, DNAJB4, EHBP1, GGH, HAVCR2, HDHD2, and PDCD1LG2 were newly identified, with little direct evidence from prior studies. Although the concrete mechanisms of most novel targets in PD onset or progression cannot yet be predicted reliably, we focus on their novelty and investigability, moving from proteins to pathways, and then to phenotype and subtype. Concerning the identified progression‐related targets, ALKBH3, GLO1, IDO1, SERPINA3, SORD, TPST1, GM2A, MICB, SH3BGRL3, and TGFBI, we found their plasma abundance to be significantly associated with cognitive decline in PD patients. They are involved in diverse biological processes: expression and glycation damage of GLO1 was demonstrated to be induced by alpha‐synuclein ablation, contributing to the development of PD; IDO1 inhibition improves motor dysfunction and provides neuroprotection in PD mice; and MICB and TPST1, novel targets identified for PD, may shape neuroinflammatory processes by modulating microglial activation and may mediate sulfation of key neuronal proteins that modulate intracellular signaling pathways, thereby influencing dopaminergic neuron survival and accelerating progression. Additionally, targets such as NUDT2, PLA2G12B, EVA1C, and MATN3 were associated with motor progression, highlighting potential targets for mitigating motor dysfunction in PD. For instance, NUDT2, involved in nucleotide metabolism, and PLA2G12B, a member of the PLA2 (phospholipase A2) superfamily, were suggestively associated with PD, influencing neuronal membrane integrity and signal transduction pathways essential for motor function. The identification of these targets underscores the complex interplay between metabolic and inflammatory pathways in PD motor symptoms. Furthermore, SH3BGRL3 was identified as an intriguing target, demonstrating a dual role by being causally linked to both cognitive and composite progression phenotypes, albeit with contrasting directions of effect.
While there is no direct evidence, this duality suggests that SH3BGRL3 may regulate multiple pathways that differentially affect various aspects of disease progression, such as influencing PD by stabilizing synaptic architecture and acting as a redox sensor, thereby protecting against α‐synuclein‐induced synaptic deficits and adjusting dysregulated intracellular signaling cascades . Aside from that, previously hinted by an MR study, GPNMB stood out as a pivotal target showing a causal relationship with increased risk of PD onset in both plasma and brain tissues . The consistent association of GPNMB across different tissues highlights its potential as a robust biomarker for early PD detection and as a promising therapeutic target to delay disease onset. The PheW‐MR analysis offered a comprehensive evaluation of the potential side effect profiles associated with the candidate targets. Impressively, 83.7% of the identified side effects were beneficial, while 16.3% were adverse. This predominance of beneficial side effects suggests that targeting these proteins may confer therapeutic advantages beyond PD, thereby enhancing the overall safety and efficacy of potential interventions. For instance, MICB's association with numerous beneficial traits within the circulatory system underscores its potential role in vascular health, which could be advantageous given the emerging evidence of vascular contributions to PD pathology . Additionally, targets such as NUDT2 and SH3BGRL3 exhibited substantial beneficial effects across various disease traits, highlighting their multifaceted therapeutic potential. Conversely, targets like GM2A and FCGR2A, which demonstrated both beneficial and adverse side effects, emphasize the necessity for cautious therapeutic modulation to balance efficacy with safety. We also conducted Cellular Distribution‐Based Clustering to elucidate the cellular distribution of genes encoding candidate target proteins across various brain regions, thereby informing the development of effective PD therapies. This analysis identified three distinct clusters, with particular emphasis on Cluster 1 and Cluster 3. Cluster 1, comprising solely TPST1, exhibited elevated expression in deep‐layer intratelencephalic and near‐projecting neurons, as well as in the mammillary body and lower rhombic lip. This specific expression profile suggests that TPST1 may play a critical role in neuronal connectivity and signaling pathways pertinent to PD onset, presenting a targeted opportunity for therapeutic intervention. Cluster 3, consisting of EVA1C, GLO1, SH3BGRL3, RSPO3, GGH, NANS, ARSA, NUDT2, HAVCR2, ALKBH3, and HDHD2, demonstrated elevated expression in metabolic and homeostatic cell populations, particularly within hippocampal regions, deep‐layer corticothalamic neurons, and vascular cells. The metabolic and homeostatic functions highlighted by Cluster 3 underscore the importance of maintaining cellular energy balance and vascular integrity in mitigating PD‐related neurodegeneration. These findings suggest that targeting metabolic pathways and supporting vascular health could be pivotal in slowing disease progression and enhancing neuronal survival. Furthermore, we conducted PPI analysis to explore synergistic relationships among targets across diverse PD phenotypes. Utilizing the STRING database for PPI network analysis, we identified a primary interaction cluster comprising FCGR2A, HAVCR2, PDCD1LG2, and IDO1, interconnected with MICB. 
Notably, MICB and IDO1 emerged as candidate targets associated with PD cognitive progression, while the remaining proteins were linked to PD onset. IDO1 plays a crucial role in regulating immune responses and inflammation, potentially contributing to cognitive deterioration in PD patients , whereas MICB modulates natural killer and T cell activity, suggesting a complex immune regulatory mechanism underlying cognitive impairments . Conversely, FCGR2A, HAVCR2, and PDCD1LG2 are primarily associated with PD onset, involving immune regulation and sustained inflammatory responses that may drive neurodegenerative changes . This cluster highlights candidate targets associated with PD onset and cognitive progression, suggesting that targeting these interconnected proteins could modulate both the initiation and advancement of the disease. The intricate interactions among these candidate targets reveal potential nodes for multi‐target drug development, where simultaneous modulation of interconnected proteins may enhance therapeutic efficacy and more effectively mitigate disease progression compared to single‐target approaches. Utilizing the DrugBank database, we assessed the druggability and therapeutic potential of the 25 identified candidate targets . Notably, 15 of these candidates were recognized as existing drug targets, highlighting significant opportunities for drug repurposing. Proteins such as EHBP1, SERPINA3, and GLO1 are currently targeted by antipsychotic and antidepressant medications, whereas FCGR2A and MATN3 are associated with corticosteroids and NSAIDs. This overlap indicates that existing pharmacological agents could be repurposed to modulate these proteins, thereby potentially accelerating the development of disease‐modifying therapies for PD. Our study is underpinned by several notable strengths that collectively enhance its scientific rigor and potential impact. Firstly, our research represents the first known PWAS utilizing the OTTERS method and large‐scale summary‐level pQTL data from deCODE to investigate both the onset and progression of PD. In contrast to previous PWAS studies that employed small‐sample pQTL data, we leveraged the OTTERS framework with extensive summary‐level pQTL data from deCODE. This methodological approach significantly increases statistical power, enabling the identification of a greater number of critical proteins, particularly those previously undiscovered. Consequently, this not only deepens our understanding of the mechanisms driving PD progression but also provides additional potential targets for developing disease‐modifying therapeutic strategies. Furthermore, the robustness of our findings is reinforced through rigorous validation methodologies, including SMR, colocalization analyses, and bidirectional MR. These approaches collectively ensure high confidence in the causal relationships between proteins and PD phenotypes. Additionally, our comprehensive assessment of potential side effects via PheW‐MR offers critical insights into the safety profiles of candidate targets, thereby informing the development of safer therapeutic interventions. The identification of existing drug targets among the candidates also facilitates drug repurposing, potentially accelerating the translation of our findings into clinical applications and enhancing the feasibility of novel therapeutic strategies. Despite the comprehensive nature of this PWAS, several limitations warrant consideration. First, our analyses do not encompass the entirety of the human proteome. 
Some proteins remain unmeasured and may also play pivotal roles in PD onset and progression, introducing the possibility of horizontal pleiotropy. Second, although our investigation incorporated both plasma and brain pQTL datasets, the absence of training sets in OTTERS for PWAS among brain proteins may reflect limitations in assay sensitivity and in the completeness of the entire study pipeline, partially weakening the strength of the evidence. Third, our study primarily includes individuals of Icelandic and European ancestries, where population homogeneity might reduce the generalizability of our findings to more diverse ethnic backgrounds, underscoring the need for replication efforts in multiethnic cohorts. Fourth, different proteomic platforms were employed (SOMAscan for plasma vs. mass spectrometry for brain), which may partially explain the minimal overlap of candidate targets across tissues. Harmonizing platform technologies in future studies could help identify additional shared targets. Fifth, the exclusion of EHBP1 from our downstream analyses due to missing expression data may have obscured its potential relevance in PD pathophysiology. Finally, the prioritized protein targets represent computational hypotheses based on limited direct evidence; they remain preliminary and call for experimental validation in the future. Addressing these limitations through broader proteomic profiling, larger and more diverse cohorts, and uniform assay platforms will be vital to refining our understanding of causal protein targets and their translational potential in PD.

All authors made significant contributions to this work and have approved the final manuscript. Concept and design: Chenhao Gao, Haobin Zhou, Weixuan Liang, Zhuofeng Wen, Jiewen Zhang, Chuiguo Huang, and Naijun Yuan. Data curation: Chenhao Gao, Haobin Zhou, Weixuan Liang, Zhuofeng Wen, and Chuiguo Huang. Analysis and interpretation of data: Chenhao Gao, Haobin Zhou, Weixuan Liang, Wanzhe Liao, Zhuofeng Wen, and Chuiguo Huang. Computational resources and support: Haobin Zhou, Chuiguo Huang, Jiewen Zhang, and Naijun Yuan. Writing of the original draft and reviews: Chuiguo Huang, Chenhao Gao, Wanzhe Liao, Zhixin Xie, Cailing Liao, Limin He, Jingzhang Sun, and Zhilin Chen. Editing draft and reviews: Haobin Zhou, Weixuan Liang, Zhuofeng Wen, Jiewen Zhang, Chuiguo Huang, and Naijun Yuan.

Each cohort included in this study has been conducted using published studies and consortia providing publicly available summary statistics. All original studies have received ethical approval and agreed to participate, and summary‐level data were provided for analysis. The authors have nothing to report. The authors declare no conflicts of interest.

Figure S1. Manhattan plot of brain protein pQTL and PD cognitive progression under the FUSION framework for PWAS. No significant associations were identified, and the top five proteins with the lowest p‐values are highlighted. Figure S2. Manhattan plot of brain protein pQTL and PD motor progression under the FUSION framework for PWAS. No significant associations were identified, and the top five proteins with the lowest p‐values are highlighted. Figure S3. Manhattan plot of brain protein pQTL and PD cognitive progression under the FUSION framework for PWAS. No significant associations were found, and the top five proteins with the lowest p‐values are highlighted. Table S1. Sources of human plasma and brain pQTL data and PD‐related phenotypes GWAS summary statistics. Table S2.
F‐Value Statistics of SNPs in SMR Analysis. Table S3. Results of the Reverse MR Steiger Test. Table S4. Plasma proteins associated with PD progressions and onset identified through PWAS analysis. Table S5. Plasma and brain proteins associated with PD progressions and onset identified through SMR analysis. Table S6. Colocalization Results of Plasma and Brain Proteins with PD Progression and Onset. Table S7. Reverse MR Results of Plasma Proteins on PD Progression and Onset. Table S8. Brain proteins associated with PD progressions and onset identified through PWAS analysis. Table S9. Results of proteins intervention on‐target side effects identified through PheW‐MR analysis. Table S10. Whole Human Brain Gene Expression Data from 10x RNA‐seq in the ABA. Table S11. Druggability Assessment of Target Proteins. Appendix S1. STROBE‐MR checklist of recommended items to address in reports of Mendelian randomization studies.
Methodological Approaches to Comparative Trend Analyses: The Case of Adolescent Toothbrushing

In the dynamic landscape of public health, staying abreast of emerging trends in health and health behaviours is paramount for effective policy formulation and implementation. Health trends, characterized by developments in risk behaviours by socio-demographic factors, serve as invaluable indicators for evolving public health policy, and have received increasing attention as a field of research . Adopting a comparative perspective on time trends enables interesting research questions about how and why health trends differ between populations. Such research questions also set strong requirements for study design, measurement, model specification, and choice of data analytic procedure. The “Health Behaviour in School-aged Children study (HBSC)” has a research design that is highly relevant for research questions about time trends in health. In the HBSC study, survey data collection is repeated every 4 years, on independent samples of new cohorts of 11- to 15-year-olds from the same countries or regions. The study has a repeated structure at the country/region-level with country A, B, and C measured at several time points, but a cross-sectional structure at the individual-level. This design allows for tests about how societies change, but not how individuals change. Previous methodological papers have addressed the unique challenges related to obtaining comparability of research protocols, sampling frames, and measurement , but the required data analytic decisions have received less attention. The current study highlights the choice of analytic approach in empirical analyses of comparative trends. The generic class of regression models provides a flexible analytical framework for comparative time trend analysis. In such models, a health outcome is the dependent variable and historical time the independent variable. For a simple linear model, the trend can be summarised through a single parameter: the regression coefficient of change per time unit. Trends are not always linear, and specification of the shape of the trend is a central task in comparative trend analysis. When there are three or more cycles of data, non-linear time trends can be fitted through higher-order polynomials, including quadratic and cubic terms of time. A simple linear shape makes a direct interpretation possible, where the trend can be translated into an “increase” or “decrease” over time. When the model includes quadratic and cubic effects, the trend is a composite of effects, and difficult to interpret directly from the regression model coefficients. To interpret non-linear trends, obtained model predictions can provide the necessary information. A challenge particular to comparative time trend studies is how to model and test country differences of trends. By focussing on regression model-based tests of trends, we have identified three major comparative approaches: the stratified approach, the fixed effect approach, and the random effects approach to trends. The stratified approach involves running a series of regression analyses split by country, regressing the relevant health outcome on time as the focal independent variable. A common model is specified and repeated for each country.
With a dichotomous health outcome as dependent variable and time as continuous independent variable, the generalized linear regression model for binomial data with a logit link becomes:

$$\operatorname{logit}(P) = \ln\left(\frac{P}{1-P}\right) = \beta_0 + \beta_1\,\mathrm{time}$$

In the fixed effect approach, the average trends and country differences of trends can be modelled through specification of main and interaction effects of time and country, where the effect of country is specified through, for example, deviation coding or simple contrast coding. With a simple contrast specification for countries A, B, C this generalized linear model becomes:

$$\operatorname{logit}(P) = \ln\left(\frac{P}{1-P}\right) = \beta_0 + \beta_1\,\mathrm{time} + \beta_2\,\mathrm{CountryB} + \beta_3\,\mathrm{CountryC} + \beta_4\,\mathrm{CountryB}\times\mathrm{time} + \beta_5\,\mathrm{CountryC}\times\mathrm{time}$$

where $\beta_0$ and $\beta_1$ describe the intercept and the effect of time for the reference country A, $\beta_2$ and $\beta_3$ describe country B and C differences in intercept relative to country A, and $\beta_4$ and $\beta_5$ are interaction terms describing country B and C differences in the effect of time relative to country A.

In the random effects approach, the average trend is modelled as a fixed term ($\beta_1\,\mathrm{time}$), but the country differences in such trends are parameterized through random variance components that can be functions of time. The random effects can be structured in several ways . The “societal growth curve specification” is relevant for our purpose . For a comparative repeated cross-sectional study, a three-level generalized linear mixed model can be specified, using person ($i$), country-year ($j$) and country ($k$) as levels within the model:

$$\operatorname{logit}(P_{ijk}) = \ln\left(\frac{P_{ijk}}{1-P_{ijk}}\right) = \beta_0 + \beta_1\,\mathrm{time} + u_{0j} + v_{0k} + v_{1k}\,\mathrm{time}$$

This three-level composite specification includes a fixed part intercept $\beta_0$, a fixed effect of time $\beta_1$, and a random part with three components: a random country-year-level ($j$) intercept component $u_{0j} \sim N(0, \sigma_{u0}^2)$ capturing fluctuations within country across years; a random country-level ($k$) intercept component $v_{0k} \sim N(0, \sigma_{v0}^2)$ capturing country-level differences in the dependent variable; and a country-level random slope component $v_{1k} \sim N(0, \sigma_{v1}^2)$ capturing between-country variation in the slope of time.

The Current Study
With a considerable diversity in types of research questions and available analytical approaches, there is a need to examine the relative utility and relevance of different approaches to comparative time trend analyses in applied research. In the current study, we demonstrate and compare model information and results from stratified, fixed effect and random effect approaches to comparative trends on a real-data case from the HBSC study: adolescent toothbrushing between 2006 and 2022 in 35 countries. The used data partly overlap with a previous study of trends in toothbrushing , but in the current study the primary objective is methodological. To structure the comparison between approaches, we used each approach to answer two seemingly simple research questions:
Research question 1 (RQ1): Did toothbrushing show an overall linear time trend 2006–2022?
Research question 2 (RQ2): Did countries/regions show different time trends?
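As a schematic preview of how the three specifications above translate into R model syntax, one might write the following; the variable names (`brush2`, `time_c`, `country`, `year`) are placeholders that are defined in the Methods below:

```r
# Schematic preview of the three model specifications in R formula notation
# (placeholder variable names; the models themselves are fitted in the Methods)
f_stratified <- brush2 ~ time_c                  # one logistic model per country
f_fixed      <- brush2 ~ time_c * country        # main effects plus country-by-time interactions
f_random     <- brush2 ~ time_c +
  (1 | country:year) + (1 + time_c | country)    # societal growth curve: u0j, v0k, v1k
```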
Data
The Health Behaviour in School-aged Children study is a large WHO-collaborative school-based survey carried out every 4 years, among a sample of 11-, 13-, and 15-year-olds, with an even distribution of boys and girls. Respondents completed anonymous questionnaires in a class-room setting following a standardized protocol, which has been developed and updated for every survey round. The HBSC protocol is used across all participating countries, ensuring high comparability of data across an increasing number of countries over time and repeated survey rounds. In the current study only data from five of the total 11 cycles of data collection was used, covering the period 2006 to 2022. Open data can be accessed on https://www.uib.no/en/hbscdata/113290/open-access . Countries or regions that took part in all five survey rounds were included, representing a sample of N = 980,192 students from 35 countries or regions, with 50.6% girls, and balanced age category composition. The 35 countries and regions are listed in .

Measures
Toothbrushing was measured with a single frequency item: “How often do you brush your teeth?” with the five response categories (1: “More than once a day”; 2: “Once a day”; 3: “At least once a week but not daily”; 4: “Less than once a week”; and 5: “Never”). In the analyses for the present paper the outcome was defined as “toothbrushing more than once a day”, collapsing the four other categories to 0.

Data Analysis
We used R version 4.4.1 for all statistical analysis and visualization, the R stats glm function for the stratified approach and the fixed effect approach, and the lme4 package glmer function for the random effects approach. Model selection was based on Likelihood ratio test (LRT) of nested models and Akaike’s information criterion (AIC) and Bayes information criterion (BIC). Model assumptions for the logistic regression models include no outliers, inclusion of all relevant independent variables, linearity across the prediction, and independence of responses. The logistic regression model with random effects also assumes normal distributed random effects. We tested logistic regression model assumptions through the random quantile residual function of the STATMOD R package . As compared to Pearsons or Deviance residuals, random quantile residuals are less affected by the scaling of the dependent variable and improve the diagnostic information from analysis of residuals from discrete outcomes . For the random effect approach, we also examined the assumption of normal distributed random effects with normal QQ-plots. For the stratified approach we used the generalized linear model for binomial data, with a logit link function, also referred to as a logistic regression model. For each region, we regressed the dependent variable toothbrushing on continuous time.
Linear (M1), quadratic (M2) and cubic (M3) effects of time were entered blockwise, with one set of analyses per country or region. In all analyses, time was centred at year 2014, to reduce multicollinearity between time, time quadratic, and time cubic. Centred time was rescaled to 10-year units, making the regression coefficient the change in toothbrushing per 10-year period. For the fixed effect approach, we modelled the average trends and country/region differences through specification of main and interaction effects of time and country/region, using deviation contrasts for country/region. Model M0 included main effects of country/region. Models M1 to M3 included linear, quadratic and cubic effects of time. To test country/region differences in trends (RQ2) we entered country/region by time, country/region by quadratic time, and country/region by cubic time (M4 to M6). The likelihood ratio test of the main effects of time allowed for the assessment of the statistical significance of an overall trend across all countries (RQ1), while the interaction time by country allowed for an omnibus test of region differences in trends (RQ2). The random effects approach was implemented through a three-level generalized linear mixed regression model including a constant logistic conditional variance at the student level ($\pi^2/3$), and random effects at the region-year and region level. Model M0 was a null model including a fixed intercept ($\beta_0$) and random country/region-year $u_{0j} \sim N(0, \sigma_{u0}^2)$ and country/region $v_{0k} \sim N(0, \sigma_{v0}^2)$ intercept variance components. Models M1 to M3 tested fixed linear ($\beta_1$), quadratic ($\beta_2$) and cubic time ($\beta_3$), relevant to interpret the trend shape, and the overall average trends of toothbrushing (RQ1). Model M4 included a random slope of time at the region level ($v_{1k} \sim N(0, \sigma_{v1}^2)$), relevant to our research question about between-region differences in trends in toothbrushing (RQ2). The R glmer function uses Laplace approximation when there are multiple levels of random effects. We extracted model-based predictions with the ggeffects package . Assumptions of normal-distributed random effects were examined with diagnostic QQ-plots from the sjPlot package.
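To make the analytic pipeline concrete, a minimal R sketch of the three approaches is shown below. It is illustrative only: the data frame `dat`, the dichotomised outcome `brush2`, and the grouping variable `country_year` are hypothetical names, survey design features are ignored, and this is not the exact code used in the study.

```r
library(lme4)

# Centre time at 2014 and rescale to 10-year units
dat$time_c <- (dat$year - 2014) / 10

# 1. Stratified approach: blockwise logistic models per country, compared by LRT
fits <- lapply(split(dat, dat$country), function(d) {
  m1 <- glm(brush2 ~ time_c, family = binomial, data = d)
  m2 <- glm(brush2 ~ time_c + I(time_c^2), family = binomial, data = d)
  m3 <- glm(brush2 ~ time_c + I(time_c^2) + I(time_c^3), family = binomial, data = d)
  anova(m1, m2, m3, test = "Chisq")       # blockwise tests of M1-M3
})

# 2. Fixed effect approach: deviation (sum) contrasts for country/region,
#    main and interaction effects of time (here the most complex model, M6)
dat$country <- factor(dat$country)
contrasts(dat$country) <- contr.sum(nlevels(dat$country))
m6 <- glm(brush2 ~ (time_c + I(time_c^2) + I(time_c^3)) * country,
          family = binomial, data = dat)

# 3. Random effects approach: three-level societal growth curve model (M4b),
#    with country-year and country intercepts and a country-level slope of time
dat$country_year <- interaction(dat$country, dat$year)
m4b <- glmer(brush2 ~ time_c + I(time_c^2) +
               (1 | country_year) + (1 + time_c | country),
             family = binomial, data = dat)
```

In this sketch, `contr.sum` implements the deviation coding described above, and the `(1 | country_year) + (1 + time_c | country)` terms correspond to the country-year and country-level random components of the societal growth curve specification.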
shows the sample frequency of toothbrushing twice or more often daily per country or region, collapsed over study cycles.

Stratified Approach
Prior to statistical analysis we inspected the descriptive frequencies of toothbrushing per country and region over time, as shown in . We note different patterns across countries. shows the results of 35 blockwise logistic regression models with toothbrushing as the dependent variable regressed on time, time-quadratic and time-cubic in the stratified approach, with three model blocks (models M1 to M3) per country/region. shows the model summary statistics Deviance, BIC, AIC and LRT model comparisons for the 35 sets of analyses. Model diagnostics of quantile residuals for model M3 in the stratified approach revealed no patterns with the linear predictor, and the normal QQ plot suggested no residual deviation for any country/region. The LRT difference between models informs about the shape and magnitude of trends, and post hoc we used the information to summarize different trend patterns. The profile of trends in the stratified approach is shown in . For two countries there were no statistically significant trends (Austria, Netherlands). Four countries (panel B) showed linear trends only (Estonia, Croatia, Hungary, Sweden). For eight countries (Finland, England, Ireland, Iceland, Luxembourg, Latvia, Slovenia, and Slovakia) there were statistically significant linear and quadratic blocks (panel C), and for twelve countries blocks of linear, quadratic and cubic components were all statistically significant [panel D: Belgium (VLG), Canada, Switzerland, Czech Republic, Spain, France, Scotland, Greenland, Lithuania, North Macedonia, Portugal, Romania]. Six countries (panel E) showed a pattern of statistically significant linear effects, non-significant quadratic effects, and significant cubic effects [Belgium (WAL), Denmark, Greece, Israel, Italy and Norway]. Three countries (panel F) showed a pattern with significant blocks of quadratic and cubic terms but without a significant linear block (Wales, Poland and Germany). To summarize, the stratified approach showed a strong diversity of trend shape, with few countries showing a monotone linear trend, but most countries showed a composite positive trend in toothbrushing.

Fixed Effect Approach
shows the model summary for the fixed effect approach. The likelihood ratio test of the difference between nested models revealed statistically significant increments in model fit for linear time (M1), quadratic time (M2) and cubic time (M3), as well as interactions country/region by linear time (M4), country/region by quadratic time (M5), and country/region by cubic time (M6). AIC and the LRT suggested M6 to be the best model, whereas BIC identified model M4 as the best fitting model. The results of model M6 of the fixed effect approach suggested a linear, quadratic and cubic component in the overall trends, and that linear, quadratic and cubic components were different across countries and regions. include model diagnostics for model M6 of the fixed effect approach. The quantile residuals for model M6 were constant across the linear prediction (panel A). Residuals did not vary as a function of time (panel B) or country/region (panel C).
The quantile-quantile plot (panel D) suggested that there were no outlying cases. shows the model coefficients for model M6 for the fixed effect approach. As the trend has three components, the single regression coefficients convey limited information about the total trend for a country or region. The model coefficients for time show that at time 0 the mean linear growth rate per decade is 0.19, but the negative quadratic effect of −0.13 and cubic effect of −0.10 indicate that the average growth rate changed across time, levelling off over time. This indicates that the overall trend was non-linear. The deviation contrasts for the main effect represent each country/region's difference from the mean level of toothbrushing at time 0 (in our example: 2014), here in logit units. The B/SE ratio for each deviation contrast is the test statistic for the hypothesis that the specific country/region contrast is different from the mean intercept, or from the mean linear component, the mean quadratic component or the mean cubic component. More specific information about the trends and the differences in trends for specific countries was obtained through model-based predictions and relevant linear composites, and we illustrate these predictions and the random effect predictions in the next section on random effects ( , panels A, B).

Random Effects Approach
shows the model summaries for the random effects approach. The likelihood ratio test of nested model differences indicated statistically significant linear (M1) and quadratic (M2) components, but not a cubic component (M3). For model M4, the inclusion of a random slope (V1time) and a slope-intercept covariance (COV01) on two degrees of freedom led to a statistically significant improvement in model fit, suggesting that the slope of time varied across countries. The statistical inference on the added random slope is only approximate, as we do not have a restricted maximum likelihood for the logistic mixed model. Based on these results, we tested a trimmed version of model M4 without cubic effects, model M4b. The BIC for this model was the smallest of all models. show model coefficients for the selected model M4b, with a fixed part and a random part. The fixed part indicated that for the average country (at random effects = 0) there was a positive linear trend component β1 = 0.134, and the negative quadratic component β2 = −0.125 indicated that the positive trend levelled off as a function of time.
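As an illustration of what these two fixed-part coefficients imply for the average country (random effects set to zero), the model-implied change in the log-odds of toothbrushing relative to 2014, with time centred at 2014 and scaled in decades, can be written out using the reported estimates only:

$$\widehat{\operatorname{logit}}(t) - \widehat{\operatorname{logit}}(0) = 0.134\,t - 0.125\,t^{2}$$

At $t = -0.8$ (2006) this gives approximately $-0.19$, and at $t = 0.8$ (2022) approximately $+0.03$; that is, the predicted log-odds rose by about 0.19 between 2006 and 2014 but by only about 0.03 between 2014 and 2022, the flattening pattern described here.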
We also computed model-based predictions for each specific country/region, relevant for specific inference about the differences in trends. A quantile-quantile plot for each random variance component indicated a close fit to a normal distribution .for both country/region and country/region-year level, as shown in . shows the model-based predicted probability of toothbrushing as a function of time for the best fitting models of the fixed effect approach and the random effect approach. The upper half of the figure shows results from the fixed effect approach model M6 (panels A and B), and the lower half shows the results for the random effects approach model M4b (panels C and D). The confidence intervals for the average trend were notably slimmer for the fixed effect approach. The fixed effect approach and the random effect approach predicted a group of countries and regions with a higher level of toothbrushing, and no apparent trend. For the number of regions that started with a low to medium level of toothbrushing the prediction was a trend of increased toothbrushing. The average marginal effect showed that the trend is positive but flattening. summarises the model information and findings from the three approaches included. Prior to statistical analysis we inspected the descriptive frequencies of toothbrushing per country and region over time, as shown in . We note different patterns across countries. shows the results of 35 blockwise logistic regression models with toothbrushing as the dependent variable regressed on time, time-quadratic and time-cubic in the stratified approach, with three model blocks (models M1 to M3) per country/region. shows the model summary statistics Deviance, BIC, AIC and LRT model comparisons for the 35 sets of analyses. Model diagnostics of quantile residuals for model M3 in the stratified approach revealed no patterns with the linear predictor , and the normal QQ plot suggested no residual deviation for any country/region . The LRT difference between models informs about the shape and magnitude of trends, and post hoc we used the information to summarize different trend patterns. The profile of trends in the stratified approach is shown in . For two countries there were no statistically significant trends (Austria, Netherlands). Four countries (panel B) showed linear trends only (Estonia, Croatia, Hungary, Sweden). For eight countries (Finland, England, Ireland Iceland, Luxembourg, Latvia, Slovenia, and Slovakia) there were statistically significant linear and quadratic blocks (panel C), and for twelve countries blocks of linear, quadratic and cubic components were all statistically significant [panel D: Belgium (VLG), Canada, Switzerland, Czech Republic, Spain, France, Scotland, Greenland, Lithuania, North Macedonia, Portugal, Romania]. Six countries (panel E) showed a pattern of statistically significant linear effects, non-significant quadratic effects, and significant cubic effects [Belgium (WAL), Denmark, Greece, Israel, Italy and Norway]. Three countries (panel F) showed a pattern with significant blocks of quadratic and cubic terms but without a significant linear block (Wales, Poland and Germany). To summarize, the stratified approach showed a strong diversity of trend shape, with few countries showing a monotone linear trend, but most countries showed a composite positive trend in toothbrushing. shows the model summary for the fixed effect approach. 
The random intercept SD at the country/region-year level (U0 = 0.079) suggested that toothbrushing fluctuates within a prediction interval of ±0.079 × 1.96 = [−0.158 to 0.158] logit units, relative to the linear trend of a country. The random intercept SD at the region level (V0 = 0.421) indicated that, for an average country, adolescents’ prevalence of toothbrushing would fall within the 95% prediction interval [−0.02 to 1.63], which after logit transformation to probabilities implies a prevalence of “toothbrushing more than once a day” varying between 49% and 83% at time 0 (year 2014). The random slope of linear time (V1), with an SD of 0.122, suggested that for the population of countries the slope of the linear component would fall within 0.134 ± 0.122 × 1.96, giving a 95% prediction interval in logit units for the linear slope of [−0.11, 0.37]. The negative correlation between intercept and slope means that countries with a low level of toothbrushing tended to have a stronger positive slope of time. We also computed model-based predictions for each specific country/region, relevant for specific inference about the differences in trends. A quantile-quantile plot for each random variance component indicated a close fit to a normal distribution for both the country/region and country/region-year levels, as shown in . shows the model-based predicted probability of toothbrushing as a function of time for the best fitting models of the fixed effect approach and the random effect approach. The upper half of the figure shows results from the fixed effect approach model M6 (panels A and B), and the lower half shows the results for the random effects approach model M4b (panels C and D). The confidence intervals for the average trend were notably slimmer for the fixed effect approach. Both the fixed effect approach and the random effect approach predicted a group of countries and regions with a higher level of toothbrushing and no apparent trend. For the group of regions that started with a low to medium level of toothbrushing, the prediction was a trend of increasing toothbrushing. The average marginal effect showed that the trend is positive but flattening. summarises the model information and findings from the three approaches included.
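The prediction intervals and normality checks reported above can be derived from the fitted model roughly as follows (a sketch continuing the hypothetical `m4b` object from the previous block; the 1.96 multiplier gives the 95% interval on the logit scale):

```r
# Variance components as a data frame (columns grp, var1, var2, vcov, sdcor)
vc <- as.data.frame(VarCorr(m4b))
sd_country <- vc$sdcor[vc$grp == "country" & vc$var1 == "(Intercept)" & is.na(vc$var2)]
sd_slope   <- vc$sdcor[vc$grp == "country" & vc$var1 == "time"        & is.na(vc$var2)]

b <- fixef(m4b)

# 95% prediction intervals for the population of countries (logit scale)
int_pi   <- b["(Intercept)"] + c(-1.96, 1.96) * sd_country
slope_pi <- b["time"]        + c(-1.96, 1.96) * sd_slope
plogis(int_pi)   # back-transformed to prevalences at time 0 (cf. the 49% to 83% range)

# Country-specific predicted random effects and their normality check
# (cf. the quantile-quantile plots of the random variance components)
re <- ranef(m4b)$country
qqnorm(re[, "(Intercept)"]); qqline(re[, "(Intercept)"])
qqnorm(re[, "time"]);        qqline(re[, "time"])
```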
For both approaches the omnibus tests concluded with a non-linear positive but gradually flattening trend in toothbrushing, and both approaches concluded with cross-national differences in the trends. The fixed effect approach included tests of the country/region by linear time, country/region by quadratic time, and country/region by cubic time interactions, and provided a more detailed perspective of the trend in each country/region. In studies of region differences in trends, region-specific conclusions are of key interest to the researcher, and the fixed effect approach can to a high degree provide relevant information; however, this level of specificity comes at the cost of model complexity. The most comprehensive model (M6 in the fixed effect approach) included 136 country/region contrasts, which, at least from a practical perspective, is high. The random effect approach did not include separate fixed estimates for the specific countries/regions, but produced country/region-specific conditional predictions. In a context with many countries and many time points, the specified random slope of time provides a flexible yet parsimonious approach to modelling cross-national differences in the trend. Compared to the fixed effect approach, the results from the random effect approach suggested a simpler parametric shape for the overall time trend, as the cubic main effects (model M3) did not achieve statistical significance. The subtle differences in conclusion on the shape of the overall trend between the fixed effect approach and the random effects approach might reflect specification differences. The country/region-year random component (U0j) models random fluctuations across time, thus reducing the need to include fixed-part non-linear components for each country. Conceptually, the provision of a random country-year component can be important, by separating longer-term linear trends from short-term societal changes that do not follow a parametric linear curve, and therefore might reflect different underlying societal processes. Under the current sample size and number of countries, key model assumptions were satisfied in all three approaches. However, the model assumptions of the three approaches have different sensitivity to the number of country/region units included. The stratified and fixed effect approaches can be conducted with 5 countries as well as with 35 countries without expected violations of model assumptions. For the random effects approach, random variance components and standard errors of estimates tend to be downward biased when the number of higher units is small . Under the current frequentist approach, 35 countries or region units is just above the recommended limit of at least 30 countries to achieve accurate estimates of the logistic mixed model . If the number of country units is smaller, Bayesian computation of random country-level effects is a relevant alternative as this method has less bias in small sample situations , but the Bayesian computation requires researchers to make additional assumptions about the prior distribution.
Limitations
We only considered a polynomial specification of time. This specification may work well to capture non-linearity within a specified time frame but be less accurate in long term projections. Decisions about trend shape need to consider both the number of time points with observations and the length of the period covered.
If events have occurred during the period covered, such as sudden technological innovations, macroeconomic shocks, changes in health legislation, or pandemics, a piecewise model or simple contrasts as an extension of the simple linear trend could be relevant alternative specifications to quadratic and cubic effects. In piecewise models the slope of a linear time effect can change at a given time point, allowing for an overall non-linear trend and turning points. Generalized additive models and generalized additive mixed models provide a general regression framework for non-linear modelling of trends. Secondly, omission of unmeasured time-invariant or time-varying independent variables might bias regression trend estimates. Unmeasured third variables at the country/region level might particularly affect the random effect approach to trend analysis, as the random region-level effect will include the effects of such unmeasured variables. For the fixed effect approach, conditioning on the main effect of region accounts for region-level third variables. The stratified approach might be least vulnerable to omission of region-level factors, as relevant third variables are restricted to those affecting the within-country context. As a basic strategy to minimize endogeneity, comparative time trend studies can counteract bias by collecting information on country/region indicators and including that information as covariates in the model. Our comparison of approaches was conducted on a set of descriptive research questions, which represent an important first stage in trend analysis. Future research should examine how the three approaches can be extended to explanatory trend analysis with country-level moderators and mediators of comparative trends, where two-stage modelling and hybrid random effects models might provide relevant example starting points for a comparison.
Conclusion
We compared the model information and results obtained from stratified, fixed effect, and random effect approaches to comparative trend analyses of adolescent toothbrushing. Our case clearly demonstrated that statistical inference about average time trends and trend differences is lacking with a stratified approach. For statistical inference regarding the trend and trend differences, the fixed effect approach provided a high level of specificity. The random effects approach produced similar conclusions, but with less detail and specificity in the trend for each country.
Primary school children’s oral hygiene knowledge assessed with different educational methods: a cross-sectional study | f018c074-c944-4422-ac14-3f553e2ec8cd | 11773895 | Dentistry[mh] | Learning in children is an active and socio-cognitive activity . In this complex learning process, various methods such as lectures, brochures, and videos are employed in education . In the modern era, it is believed that educational methods for children should be engaging and utilize communication tools that children are familiar with, such as electronic devices . A study emphasizes that a multimedia teaching environment, an innovative method, is effective in enhancing children’s learning capacity. It is also stated that initiating the oral and dental health protection and prevention program with multimedia tools is more beneficial for school-aged children. Therefore, it is necessary to investigate the contribution of various educational methods to the knowledge adopted in oral and dental health education . Oral and dental health education begins with the education and practices of parents, who can further contribute through oral and dental health promotion programs . Dentists and dental hygienists play a fundamentally important role in promoting adequate concepts in this field . Education provided by experts helps children change their oral hygiene habits and maintain good oral health . Traditional education styles, supported by visual aids such as dental models, are considered crucial for developing oral and dental health knowledge due to their long-term impact on the target audience . To encourage children’s understanding, various methods, including videos and animations, are employed by animating static visual aids . It is reported that education provided through animation techniques will make it easier to explain complex concepts and thus make them easier to understand and remember . Peer-led education, one of the methods that enables children to absorb information more easily, is defined as an education program in which students of similar ages teach their peers and holds an important place in the literature . One study suggests that peer leaders are as effective as, or even more effective than, teachers when communicating with children . Additionally, with the advancement of technology and the increasing use of social media tools, studies have reported that Instagram is a platform frequently used by students for educational activities and that reels videos are widely utilized . Various educational tools, such as brochures, videos, oral presentations, and animations, have been employed in studies providing oral and dental health education to children in the literature . However, no study has yet been identified that incorporates peer-led reels videos in this context. Considering the various effects of technological advancements and the educational methods available to children in the current era, we believe that a peer-led reels video providing information about oral and dental health may be effective. The aim of this study is to evaluate the effects of different educational methods on the knowledge levels of primary school children regarding oral hygiene. The first null hypothesis is that there will be no difference in the level of knowledge acquired by children about oral hygiene in terms of education with verbal explanation, animation video and peer-led reels video. The second null hypothesis is that gender and tablet/mobile phone use will have no effect on the knowledge acquired.
Ethical approval
The current study was approved by the Clinical Research Ethics Committee at the Medical School of Tokat Gaziosmanpaşa University (Approval No. 21-KAEK-275, dated 30.12.2021). The study was conducted in accordance with the guidelines of the Helsinki Declaration and adhered to the Consolidated Standards of Reporting Trials (CONSORT) guidelines.
Sample size calculation
The sample size was estimated using G Power software v.3.1.9.2. A minimum of 592 children was required to detect a significant difference using the “ANCOVA: Fixed Effects, Main Effects, and Interactions” test, with a type I error (α) of 0.05, power (1-beta) of 95%, and effect size of 0.162 .
Study design
This cross-sectional study included 5th grade students aged 10–12 years from primary schools located in the center of Tokat, Turkey. All 28 primary schools in Tokat’s central district were contacted with the permission of the Provincial Directorate of National Education, and the purpose of the study was explained to the school principals. Fifteen schools equipped with visual communication devices and sound systems were selected. Informed consent forms were sent to the parents of students in these selected schools. The study was conducted between January and June 2022 in conference halls or classrooms within the schools. Inclusion criteria for the study were 5th grade primary school children without any mental, visual, or auditory disabilities, and whose parents had provided written consent for their participation. The study questions were developed by 4 expert pediatric dentists following a comprehensive literature review . The questions were designed to assess the children’s knowledge of oral hygiene. In order to ensure the validity of the questions, the opinions of 8 additional expert pediatric dentists were obtained. The ratings of the expert opinions in the Lawshe technique were graded as ‘Appropriate’, ‘Appropriate but should be corrected’ and ‘Should be removed’. The experts were asked to tick one of the above ratings for each of the 8 items in the form. In order to calculate the content validity ratios of the scale, ‘Appropriate’ was scored as 3, ‘Appropriate but should be corrected’ as 2 and ‘Should be removed’ as 1. In order to determine the content validity of the items to be included in the scale, the qualitative data obtained in line with the expert opinions were converted into quantitative data by calculating the content validity ratio and the content validity index; the content validity ratio was calculated first, followed by the content validity index. Calculations were made with the Microsoft Excel 2016 programme. The calculation confirmed their conformity with a coefficient of 0.78 . The questionnaire used was developed for this study. The questionnaire was divided into two parts. The first part collected demographic data, such as age, gender, and tablet/mobile phone usage. The second part consisted of 8 questions assessing general knowledge about oral hygiene (Table ). Each correct answer was scored as 1 point, while incorrect and unanswered questions received 0 points. A total of 490 students who completed the baseline questionnaire were randomly assigned to one of three groups: verbal explanation group, animation group, or peer-led reels group (Fig. ). For randomisation, the completed questionnaires were numbered and participants were assigned to groups using a table of random numbers generated in Microsoft Excel 2016.
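For illustration, the Lawshe content validity calculation described above can be reproduced outside a spreadsheet roughly as follows. The ratings matrix here is hypothetical, and treating only ‘Appropriate’ (score 3) as an essential rating is one common convention rather than the study’s documented rule; this is a sketch in R, not the authors’ Excel worksheet.

```r
# Hypothetical ratings: 8 questionnaire items rated by 8 experts
# (3 = Appropriate, 2 = Appropriate but should be corrected, 1 = Should be removed)
set.seed(1)
ratings <- matrix(sample(1:3, 8 * 8, replace = TRUE, prob = c(0.05, 0.15, 0.80)),
                  nrow = 8,
                  dimnames = list(paste0("item", 1:8), paste0("expert", 1:8)))

N   <- ncol(ratings)              # number of experts
ne  <- rowSums(ratings == 3)      # experts judging an item essential ("Appropriate")
cvr <- (ne - N / 2) / (N / 2)     # Lawshe content validity ratio per item
cvi <- mean(cvr)                  # content validity index: mean CVR over retained items

round(cvr, 2)
round(cvi, 2)                     # compared against the minimum acceptable value for the panel size
```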
In the verbal explanation group, a researcher wearing a white coat provided a one-time, 3-minute oral hygiene education session using a jaw model. In the animation group, oral hygiene education was presented to the children once as a cartoon video lasting 1 min and 17 s (via Windows Media Player). The animation video was created by the researcher using the test version of “ www.vyond.com ”. The animation featured a character voiced by one of the researchers, along with informative text and background music. The character conveyed oral hygiene information through both voice narration and text content. In the peer-led reels group, oral hygiene information was presented for 1 min through a video created on the Instagram platform. The video was played once via Windows Media Player. The video depicted a child of a similar age to the target audience receiving oral hygiene information, followed by the child demonstrating the behavior, with background music. The video included some text, but did not feature the child’s voice or any spoken information. The education given in all three intervention groups was prepared to match the questionnaire questions one-to-one. All educational methods included the same information about general oral hygiene. Following the education, the children completed the same questionnaire again. The knowledge acquired pre- and post-education was evaluated according to the three intervention groups, gender and tablet/mobile phone use.
Statistical analysis
Data analysis was conducted using the IBM Statistical Package for the Social Sciences (SPSS for Windows, version 26.0, SPSS Inc., Chicago, IL, USA). Descriptive statistics including number and percentage for categorical data and mean and standard deviation for continuous data were calculated. The normality of the data was assessed using the Kolmogorov-Smirnov test. The Wilcoxon test was used to compare the knowledge levels before and after oral hygiene education, while the analysis of covariance (ANCOVA) was used to evaluate the effects of group, gender, and tablet/mobile phone usage on the knowledge level at the end of the oral hygiene education. A p-value of < 0.05 was considered statistically significant in all tests.
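The analysis described above could be reproduced in R along the following lines; this is a hedged sketch with illustrative variable names (`pre` and `post` for the 0–8 knowledge scores, and factors `group`, `gender`, `tablet`), not the authors' SPSS syntax.

```r
# dat: one row per child, with pre, post, group, gender, tablet
dat$group  <- factor(dat$group)
dat$gender <- factor(dat$gender)
dat$tablet <- factor(dat$tablet)

# Wilcoxon signed-rank test of pre- vs post-education knowledge within a group
with(subset(dat, group == "animation"),
     wilcox.test(pre, post, paired = TRUE))

# ANCOVA: post-education knowledge adjusted for pre-education knowledge,
# with main effects of group, gender and tablet use and their group interactions
anc <- aov(post ~ pre + group * gender + group * tablet, data = dat)
summary(anc)
```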
This study was completed with 464 children, achieving a power (1-beta) of 89%. Initially, 490 children who completed the baseline questionnaire were included in the study. However, due to incomplete participant information in the follow-up questionnaires, a total of 464 children were included in the final analysis (Fig. ). Of the 464 children, 245 (52.80%) were girls and 219 (47.20%) were boys. Their mean age was 11.14 ± 0.49 years, with ages ranging from 10.00 to 11.99 years. Among them, 400 (86.20%) reported using a tablet/mobile phone, while 64 (13.80%) reported not using one. In this study, where knowledge scores ranged from 0 to 8, the median knowledge score in the verbal explanation group was 4 before the education and 7 after the education, a statistically significant difference ( p < 0.001). In the animation group, it was 5 before the education and 7 after the education, with a statistically significant difference between them ( p < 0.001). In the peer-led reels group, it was 4 before the education and 6 after the education, with a statistically significant difference between them ( p < 0.001) (Table ). Pre-existing knowledge of oral hygiene among students could potentially act as a confounding factor. Therefore, after controlling for covariates related to pre-existing knowledge, the post-education knowledge levels were evaluated. The results of the analysis of covariance (ANCOVA) are presented in Table , which shows the main effects and interactions of the independent variables (group, gender, tablet or mobile phone use) on post-education knowledge level, while controlling for pre-education knowledge levels. The main effect of group was statistically significant, affecting post-education knowledge level by 3.8% (partial eta squared = 0.038). However, the main effects of gender and tablet or mobile phone use, as well as their interactions with group, were not statistically significant ( p = 0.694, p = 0.641). Overall, group, gender, and tablet or mobile phone use explained 28% of the variance in post-education knowledge level. The post-education knowledge levels in the verbal explanation and animation groups were similar, whereas the knowledge levels of the peer-led reels group were significantly lower compared to the verbal explanation and animation groups ( p < 0.001). The highest post-education knowledge levels were observed in the animation group (6.73), followed by the verbal explanation group (6.57), and the lowest levels were found in the peer-led reels group (5.95) ( p < 0.001) (Table ).
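The effect sizes and adjusted group means reported here follow directly from the ANCOVA table; the arithmetic is sketched below, continuing the hypothetical `anc` model from the Methods sketch rather than the authors' SPSS output.

```r
tab    <- summary(anc)[[1]]            # ANOVA table of the ANCOVA model
ss     <- tab[["Sum Sq"]]
ss_res <- ss[length(ss)]               # residual sum of squares

# Partial eta squared per term: SS_effect / (SS_effect + SS_residual)
eta2_p <- ss[-length(ss)] / (ss[-length(ss)] + ss_res)
setNames(round(eta2_p, 3), trimws(rownames(tab))[-nrow(tab)])

# Proportion of variance in post-education knowledge explained by the model
1 - ss_res / sum(ss)

# Group means adjusted to the average pre-education knowledge level
adj <- data.frame(group = levels(dat$group), pre = mean(dat$pre))
adj$post_adj <- predict(lm(post ~ pre + group, data = dat), newdata = adj)
adj
```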
In our study, we aimed to combine the benefits of peer influence and the accessibility of educational videos via social media to enhance children’s knowledge on oral hygiene. To achieve this, a peer-led reels video was created. This study, which examines the effectiveness of a peer-led reels video in improving children’s knowledge of oral hygiene compared to traditional education methods involving animated videos and verbal explanations with dental models, is the first of its kind in this field. This study resulted in a model that explains 28% of the variance in post-education knowledge acquired through different educational methods, gender, and tablet/mobile phone usage. There is no consensus on when children should be given oral hygiene education. Some researchers suggest that this should begin with parental education and parental guidance at birth , while others recommend school age . In this study, children were directed to questions about oral hygiene and they answered these questions themselves at school. According to the latest issue of the Turkish Oral and Dental Health Research Report, the most common reason for visiting a dentist in our country is a dental problem, with a rate of 90.4%, and when analysed by age, the first visit to the dentist most commonly (22.4%) occurs around the age of 10 years . At the same time, this age group was selected as their development would enable them to understand cause-and-effect relationships and use logic to answer the questions, which is consistent with previous research . For a long time, lectures delivered by teachers in classrooms have been the most common form of teaching and learning. The main advantage of this method is the direct interaction between the teacher and students, which allows for feedback through eye contact during the lesson . However, students process information in different ways, and as a result, various educational approaches such as verbal, written, visual, and auditory methods are effective in supporting learning. Given the diversity in learning styles, it is well-established that various educational methods can play a role in oral and dental health education programs, and that a single health education approach is unlikely to be suitable for all students . With the advancement of technology and the increasing prevalence of children using technological devices, there is a need to integrate new methods for delivering oral health education . Therefore, as a main point of our study, traditional verbal explanations with a dental model were compared to animation video and peer-led reels video methods for providing oral hygiene education. In addition, another main point of our study was to observe the effect of these verbal explanations and the other educational methods, especially in children who are not exposed to technological devices such as tablets/mobile phones. According to the findings of our study, all three educational methods were effective, with an increase in children’s knowledge levels observed after the interventions. The limited knowledge of oral hygiene in children before the education and the increase in knowledge and awareness with education demonstrate the need for oral and dental health education programs in schools. With the increasing accessibility of technological devices (such as tablets, mobile phones, and computers) and the rise in social media usage among children, changes are occurring in their learning perceptions .
Multimedia, which includes video and particularly cartoon animation, is extensively researched as an instructional aid . The colorful characters and animated stories increase children’s focus on education, making the conveyed messages more interesting and entertaining . Additionally, these contents provide a standard level of education and can be repeated in the same format according to the viewers’ needs. Therefore, it is possible to prevent knowledge discrepancies that may arise in education given at different times or through different experts using traditional methods . Peer leaders are also mentioned in the literature as an alternative to experts in transferring knowledge. The effectiveness of this method is supported by social learning theories that propose that sensitive information is more easily shared among peers of similar age . However, there is no consensus in the literature regarding the roles of these methods in oral and dental health education. In studies comparing traditional and animation-based methods, Alhayek et al. state that both methods are applicable, while Sinor concludes that the animation environment is more effective and sustainable in providing oral health education. In their study, Gavic et al. found that there was no statistically significant difference in the knowledge acquired by children through traditional methods and videos. However, after the education conducted through brochures, they found that children had a lower level of knowledge, but all three educational methods were effective. Yeo et al., who investigated the effect of peer-led videos on oral hygiene, stated that it was effective in improving the overall oral hygiene knowledge of third-grade students. In our study, in addition to traditional methods and animation, peer-led reels videos, which have not been previously examined in the literature, were included. The effect of educational method on post-education knowledge level was observed to be 3.8%, while no effect of gender or tablet/mobile phone use on knowledge level was observed. This situation is associated with the progress of education in our society, independent of gender characteristics, and the high probability of children being exposed to tablets/mobile phones in today’s conditions, even if they do not own them. In our study, children who received education through animated videos had the highest level of knowledge after the education, while similar results were found in children who received education through traditional methods. Additionally, peer-led education with reels videos resulted in lower knowledge acquisition compared to other methods. This finding provides evidence that the attractiveness of animations to children leads to an increased focus on the information provided by the animation . Furthermore, it is believed that the traditional method of education, which is familiar to children who are accustomed to didactic education provided by teachers in classrooms, contributes to higher learning. Although it has been suggested that peer-led education is equally or more effective than that provided by teachers , our study indicates that the lower knowledge acquisition observed in the peer-led reels video may be attributed to its lack of audio, as it appeals solely to the visual sense, in contrast to other methods that engage both visual and auditory senses.
Additionally, this silent video was only played once for the children, which means that there is a possibility that some information may have been conveyed too quickly, without allowing the children to focus or make meaningful connections. Our study is characterized by the evaluation of the effectiveness of three different educational methods, including the first-time application of peer-led reels videos, and the implementation of these methods on students with varying characteristics in different schools, which constitutes the strength of our study. The first limitation of the study is that it was conducted in only one region in Turkey. Secondly, whether the educator’s voice in the animation video had different effects across genders could not be evaluated. In this study, gender, tablet/mobile phone use and three different educational methods (verbal lecture, animation video and peer-led reels video) explained 28% of the variance in children’s post-education oral hygiene knowledge. The remaining 72% of the variance has yet to be explained, and future studies should evaluate personal demographic data and other educational methods that may affect knowledge gains. In our study, it was observed that learning was greater with combined audio and visual stimuli. Based on this, we believe that it would be useful to evaluate the learning curves of children with tools such as virtual reality that appeal to more than one sense. Also, as in other health education fields, there is a problem with retaining and applying knowledge in oral hygiene education. Therefore, there is a need for new studies that reach broader audiences nationwide, which follow up on the application of practices in children after gaining knowledge. In oral hygiene education, it has been observed that traditional methods, animations and peer-led reels videos are effective in providing children with relevant information about oral hygiene. With the technological revolution making information more accessible and consumable for the new generation, we believe that animation videos may be more favorable today.
Dual-Domain Primary Succession of Bacteria in Glacier Forefield Streams and Soils of a Maritime and Continental Glacier | 91ad6c8f-62ae-4797-a3c8-af22b3d80f98 | 11829940 | Microbiology[mh] | Glaciers are vital components of the global hydrological cycle, covering ~ 10% of the Earth’s land surface and containing ~ 75% of its freshwater . As a consequence of climate change, glaciers are retreating at an unprecedented rate worldwide, with many predicted to disappear within decades . Retreating glaciers unveil new terrestrial and aquatic landscapes in glacier forefields , creating arenas for ecological succession . These newly exposed habitats, though initially hosting minimal biological activity, are ideal natural laboratories to study the fundamental processes driving biodiversity and ecosystem development. Thus, there has been increasing interest in exploring the primary succession of glacier forefield soils , glacier-fed streams , and glacier-fed lakes . Studying characteristics of primary succession provides critical insights into how life colonizes, and ecosystems evolve, in response to changing environmental conditions. Glacier forefields are notable for their concurrent creation of soil and stream ecosystems, which develop in parallel yet interact intimately. Despite significant research into individual succession pathways in these environments , a comprehensive understanding of their synchronous successions remains elusive. In glacier forefields, new soil originates from basal sediments left behind as the glacier terminus retreats, along with supraglacial sediments in the case of debris-covered glaciers, both of which are colonized by early-successional microorganisms . These initial microorganisms are essential for soil development and biogeochemical cycles, initiating the formation of terrestrial ecosystems . Over time, vegetation begins to colonize, accelerating soil development by secreting organic acids and accumulating organic matter . The progressive retreat of the glacier exposes new substrate over time, establishing the glacier forefield chronosequence (GFC), a gradient that reflects increasing substrate exposure time with distance from the glacier terminus . This GFC represents a mixture of exposure time and spatial distance, as areas farther from the glacier terminus have generally been exposed for longer periods, though the exposure timeline is not necessarily linear . Along the GFC, soil microbial communities experience profound shifts driven by changes in both biotic and abiotic conditions . Accompanying soil ecosystems, glacier-fed streams are a key ecological and geomorphological feature in glacier forefields, serving as biogeochemical channels and biodiversity hotspots . Glacier-fed streams receive materials transported from upstream glacial and terrestrial ecosystems, linking glacial processes and downstream aquatic ecosystems . As they are flushed from glaciers, streams share a substantial proportion of microorganisms with soil. Retreating glaciers result in the lengthening of glacier-fed streams, which are colonized by aquatic biota over time and characterized by longitudinal alterations in hydrological and physicochemical environments . In glacier-fed streams, benthic biofilms are the main contributors to primary production, integrating biogeochemical cycling through nutrient uptake, transfer, and remineralization .
Previous studies have indicated that biodiversity in glacier-fed stream biofilms can either decrease or increase with glacier retreat, depending on factors such as changes in hydrology, nutrient availability, and temperature along the stream gradient . In glacier forefields, soil and stream ecosystems develop concurrently, yet limited research has explored how these systems interact and potentially influence each other’s successional pathways. This study addresses this gap by examining synchronous primary succession processes in these two interconnected domains. Alpine glaciers on Earth can be broadly categorized into continental and maritime glaciers, which differ substantially in many aspects, such as climate patterns, ablation processes, mass balance, and retreat rate (O’Neel et al., 2014). Continental glaciers, such as those in the Tianshan Mountains of Central Asia, experience a dry and cold climate and are less sensitive to climate change, with low mass exchange and low retreat rates (O’Neel et al., 2014). In contrast, maritime glaciers, such as those in the Maritime Alps (south-western European Alps) and the Hengduan Mountains (Southeastern Tibetan Plateau), exist in wet and warm climates and are highly sensitive to climate change, with higher mass exchange and faster retreat rates . Numerous studies have investigated the primary succession of terrestrial and aquatic ecosystems in glacier forefields . In addition, different succession patterns have been found in soil along the GFC between continental and maritime glaciers . While primary succession is a well-studied phenomenon, its synchronous manifestation in adjacent terrestrial and aquatic habitats, especially under the influence of different glacier types, remains underexplored. In this study, we introduced the concept of “Dual-Domain Primary Succession,” which refers to the synchronous yet distinct development of microbial communities in both soil and stream ecosystems within glacier forefields. This concept is grounded in ecological succession theories that consider how adjacent environments, connected by nutrient and microbial exchanges, might exhibit both independent and interlinked successional trajectories. By focusing on microbial communities, we aim to examine whether shared initial colonizers and subsequent shifts align across soil and stream domains or diverge based on unique habitat conditions. To support this concept, this study examines bacterial communities in glacier forefield soils and associated glacier-fed streams from a typical continental glacier in the Tianshan Mountains of Central Asia and a maritime glacier in the Hengduan Mountains on the southeastern edge of the Qinghai-Tibet Plateau. Expected results for “Dual-Domain Primary Succession” would include similarities in initial microbial communities due to shared colonization sources, followed by habitat-specific differentiation influenced by distinct environmental pressures, leading to a markedly different nature and pace of succession between soil and stream environments as well as between the maritime and continental glaciers.
Study Area
The continental glacier studied is Urumqi Glacier No.1 (43°06′N, 86°49′E), with its terminus at an elevation of 3796 m. It is situated in the eastern Tianshan Mountain Range in Central Asia (Fig. ). The maritime glacier studied is Hailuogou Glacier (29°34′N, 101°59′E), located on Gongga Mountain, the highest peak in the Hengduan Mountain Range, with its terminus at an elevation of 2942 m (Fig. ).
The Tianshan Mountain Range in China contains 7934 glaciers, covering a total area of 7179 km² . This accounts for 16.3% of the total number and 13.9% of the total glacier area in China . This glacier region, predominantly influenced by westerly circulation, experiences a temperate continental climate characterized by low precipitation, mostly occurring during the summer months . Accelerated global warming has led to rapid shrinkage of these glaciers . The Urumqi Glacier No.1, for instance, has retreated from an area of 1.95 km² in 1962 to 1.52 km² in 2018 , with predictions indicating it may nearly disappear by 2100 . The mean annual precipitation of Urumqi Glacier No.1 is 475 mm and the mean annual air temperature is −4.8 °C . The Second Glacier Inventory of China identified 8607 maritime glaciers, covering an area of 13,203 km² , primarily located in the southern and eastern Qinghai-Tibet Plateau. These glaciers represent 18.6% of the total number and 22.2% of the total glacier area in China. The maritime glaciers on Gongga Mountain are primarily influenced by the southwest and southeast monsoons. These glaciers have been retreating at an average rate of 1.04 km² per year from 1994 to 2021 . Since the 1930s, the Hailuogou Glacier has retreated by 2 km. The Hailuogou Glacier receives an average annual precipitation of 1947 mm and experiences a mean annual air temperature of 4.2 °C . Unlike continental glaciers, the termini of maritime glaciers, such as Hailuogou Glacier, often extend into forested areas.
Field Sampling and Chemical Analyses
The sampling activities took place in July and August 2021, focusing on the forefield soils (SO) and glacier-fed streams (ST) of Urumqi Glacier No.1 and Hailuogou Glacier. In each glacier, paired soil and stream samples were collected along the glacier forefield chronosequence (GFC), starting from the glacier terminus (10 m away from the glacier) and extending to areas with well-developed vegetation of climax community (2100 m and 830 m away from the glacier for Urumqi Glacier No.1 and Hailuogou Glacier, respectively). The space-for-time substitution is used to study ecological processes that occur slowly by examining the relationships between ecological variables at sites that are assumed to be at different stages of development . A total of 7 paired soil and stream samples were collected from Urumqi Glacier No.1, and 5 pairs from Hailuogou Glacier. While this scope provided an initial exploration of dual-domain succession, we acknowledge that more comprehensive studies involving multiple sampling campaigns and additional glacier sites are needed to fully validate and generalize the findings. At each stream sampling site, 6 to 9 submerged rocks were randomly sampled across the stream . The benthic biofilms were then thoroughly removed by scrubbing a 4.5 cm diameter area on the upper surface of each rock using a sterilized nylon brush. The slurry on the rock and brush was rinsed using sterile water and collected in an acid-cleaned polyethylene bottle to a volume of 500 mL. From the mixed slurry, 100 mL was filtered onto 0.2-µm membrane filters (polycarbonate, Whatman, UK) in triplicate, and the filters were combined and frozen on dry ice immediately in the field for DNA extraction and sequencing in the laboratory.
Additionally, water conductivity (Cond) and pH were measured in situ using a handheld multiparameter instrument (YSI ProPlus, Yellow Springs, Ohio), and 500 mL water was collected in acid-cleaned polyethylene bottles in triplicate for further chemical analyses in the laboratory. The chemical analyses, including total nitrogen (TN), nitrate (NO 3 − ), ammonium (NH 4 + ), total phosphorus (TP), soluble reactive phosphorus (SRP), and dissolved organic carbon (DOC), were conducted according to our previous studies . TN and TP samples were firstly persulfate oxidized, TN was measured using the ion chromatography method (EPA 300.0), and TP was measured using the ascorbic acid colorimetric method (EPA 365.3). Water samples for NO 3 − , NH 4 + , SRP, and DOC were first filtered through glass fiber filters (GF/F, Whatman). After filtration, NO 3 − was measured by ion chromatography (EPA 300.0). NH 4 + was determined using the indophenol colorimetric method (EPA 350.1). SRP was quantified with the ascorbic acid colorimetric method (EPA 365.3). DOC was measured using a TOC-Analyzer (TOC-VCPH, Columbia, Maryland). The variations of these stream physicochemical variables along the increasing distance to glacier are shown in Fig. . In glacier forefield, soils are originated from the glacier basal sediments, surface debris, and in situ bedrock pedogenesis . Soil samples were collected from three transects which were perpendicular to the stream bank close to the stream sampling site (5 m from the stream shore). The sampled soils were not subject to regular flooding from the adjacent stream. Topsoil samples (0–10 cm depth) were collected using a soil auger with a 10 cm inner diameter along each transect at three evenly spaced points (1 m apart), chosen to capture the typical variation in soil properties near the stream. The auger was thoroughly cleaned between each sampling point using deionized water and ethanol to prevent contamination. The soil from each site was combined into a composite sample. The use of composite samples was intended to capture the general microbial community characteristics at each site while balancing practical fieldwork constraints, such as time, access, and preservation challenges in these remote and extreme environments. Microbial samples were placed in 45-mL sterile centrifuge tubes and immediately kept frozen on dry ice to preserve their integrity. The remaining soil was placed in sterile bags and transported in a cooler to the field laboratory for further processing. In the laboratory, the soil samples were air-dried at room temperature (approximately 25 °C) for 5 days and then passed through a 2-mm nylon sieve to remove visible roots, residues, and stones. Soil organic carbon (SOC), TN, TP, NO 3 − , NH 4 + , SRP, pH, and conductivity were analyzed . SOC was determined using the potassium dichromate oxidation spectrophotometric method (HJ615-2011). TN was assessed using the modified Kjeldahl method (HJ717-2014). TP was measured after microwave extraction with nitric acid, using the ascorbic acid colorimetric method. NO 3 − and NH 4 + were measured using a spectrophotometer after the extraction with 2 M potassium chloride (HJ634-2012). SRP was determined using the ascorbic acid colorimetric method after the extraction with 0.5 M sodium bicarbonate (HJ704-2014). pH was measured using a pH meter with a 1:2.5 soil-to-distilled water ratio. Conductivity was measured using a conductivity meter with a 1:5 soil-to-distilled water ratio. 
The variations of these soil physicochemical variables along the increasing distance to the glacier were shown in Fig. . DNA Extraction, PCR, and Sequencing Following the manufacturer’s protocols, DNA was extracted from filters (stream samples, n = 12) and soils ( n = 12) using the Magen Hipure Soil DNA Kit (Magen, China). The extracted DNA was purified and quantified, and 20 ng of template DNA was used to generate amplicons. The V3-V4 hypervariable regions of prokaryotic 16S rRNA were amplified using the forward primer 343F (5′-TACGGRAGGCAGCAG-3′) and the reverse primer 798R (5′-AGGGTATCTAATCCT-3′) . PCR amplifications were performed in triplicate for each sample to reduce amplification bias. PCRs were carried out in a 25 µL reaction mixture comprising 2.5 µL of TransStart buffer, 2 µL of dNTPs, 1 µL of each primer, 0.5 µL of TransStart Taq DNA polymerase, and 20 ng of template DNA. The thermal cycling was performed on an ABI GeneAmp® 9700 (USA) with the following program: an initial denaturation at 94 °C for 5 min; 24 cycles of denaturation at 94 °C for 30 s, annealing at 56 °C for 30 s, and extension at 72 °C for 20 s; followed by a final extension at 72 °C for 5 min. One soil sample failed to amplify during the PCR process. Following PCR, the triplicate products were pooled to create a DNA library, which was then purified and quantified with a Qubit 4 Fluorometer (Thermo Fisher Scientific, Waltham, USA). This resulted in a total of 23 amplicon libraries. These DNA libraries were multiplexed and sequenced on an Illumina MiSeq platform (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. Double-end sequencing was performed on both positive and negative reads. Paired-end reads were initially processed using Trimmomatic software to identify and remove ambiguous bases (N) and trim sequences with an average quality score below 20, employing a sliding window approach for quality trimming. After this step, paired-end reads were merged using FLASH software with parameters set for a minimum overlap of 10 bp, a maximum overlap of 200 bp, and a maximum mismatch rate of 20%. Further denoising involved excluding reads with ambiguous or homologous sequences and those shorter than 200 bp. Only reads with at least 75% of bases scoring above Q20 were retained. The resulting high-quality sequences were used for operational taxonomic units (OTUs) clustering, which was conducted using VSEARCH 2.4.3 with a sequence similarity threshold set at 97% based on the SILVA 138 database . Singleton OTUs were excluded. We rarefied the data to standardize sequencing depth (21,774 sequences) across samples (Fig. ), which can help reduce potential biases caused by uneven sequencing effort and improve comparability between samples. This step was taken after confirming significant variation in sequencing depth across samples. The raw sequences have been uploaded to the China National Center for Bioinformation. Analyses The differences in α-diversity and β-diversity of bacterial communities across different sample categories (soil and stream samples collected from the studied maritime and continental glacier) were evaluated using the Wilcoxon rank-sum test. Regression analyses were performed between α-diversity and the distance to the glacier to elucidate the successional pattern of α-diversity. The best regression model was selected based on the corrected Akaike Information Criterion (AICc). 
To illustrate bacterial community differences across sample categories, non-metric multidimensional scaling (NMDS) ordinations were conducted based on Bray–Curtis dissimilarities using the relative abundance of OTUs. The differences between sample categories were further assessed by ADONIS using the “vegan 2.5–7” package . To uncover the processes shaping bacterial communities, β-diversity (βsor, Sorensen dissimilarity) can be partitioned into two components: turnover (βturn) and nestedness (βnest) . This partitioning was performed using the “betapart 1.5.4” package . To assess the relationships of α-diversity and β-diversity with environmental variables, we conducted Mantel tests using the “linkET 0.0.7.4” package. To control for type I errors of the Mantel tests, adjustments for multiple comparisons were applied using the false discovery rate (FDR) method. The bacterial co-occurrence network for each sample category was constructed based on the Pearson correlation between pairs of OTUs (only the OTUs with an average relative abundance of ≥ 0.1% and occurring in more than 80% of the samples). The P-values were adjusted using the FDR method . Only the correlations with Pearson’s R > 0.8 or R < −0.8 and P < 0.01 were considered for network construction. The topological features of the network were calculated using igraph 1.3.5 . The network visualization was performed in Gephi 0.9.7 . All the statistical analyses were carried out in R 4.1.2 .
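A condensed sketch of the beta-diversity partitioning and the correlation-threshold network construction described above is given below, with `otu_pa` (presence/absence) and `otu_rel` (relative abundance) as assumed samples-by-OTUs matrices for one sample category; the thresholds follow the text, but the code is illustrative rather than the authors' pipeline.

```r
library(betapart)
library(igraph)

# Sorensen dissimilarity partitioned into turnover and nestedness components
bp <- beta.pair(otu_pa, index.family = "sorensen")
# bp$beta.sim = turnover, bp$beta.sne = nestedness, bp$beta.sor = total dissimilarity

# Pairwise Pearson correlations between OTUs with FDR-adjusted P-values
n  <- nrow(otu_rel)
r  <- cor(otu_rel, method = "pearson")
tt <- r * sqrt((n - 2) / (1 - r^2))
p  <- 2 * pt(-abs(tt), df = n - 2)
p_adj <- p
p_adj[upper.tri(p)] <- p.adjust(p[upper.tri(p)], method = "fdr")   # adjust over unique OTU pairs
p_adj[lower.tri(p)] <- t(p_adj)[lower.tri(p_adj)]

# Retain edges with |r| > 0.8 and adjusted P < 0.01, then build the network
keep <- abs(r) > 0.8 & p_adj < 0.01
diag(keep) <- FALSE
net <- graph_from_adjacency_matrix(keep, mode = "undirected")
c(nodes = vcount(net), edges = ecount(net))   # basic topological features
```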
Alpha-Diversity and Community Composition

After quality filtering, a total of 1192 OTUs were clustered. In general, bacterial communities in forefield soils had higher α-diversity than those in adjacent glacier-fed streams (Fig. a).
Additionally, bacterial communities in both the forefield soils and the glacier-fed stream of the Hailuogou Glacier had higher α-diversity than those of the Urumqi Glacier No.1 (Fig. a). Along GFC (measured as the distance from the glacier terminus), regression analysis revealed that OTU richness significantly decreased with increasing distance from the glacier in both the soil and the stream of the Hailuogou Glacier, with the rate of decrease being faster in the stream than in the soils (Fig. b). Moreover, the number of shared OTUs between paired soil and stream samples also significantly decreased (Fig. d). For the Urumqi Glacier No.1, however, OTU richness significantly increased in the stream and displayed a unimodal distribution pattern in the soils along GFC (Fig. c). The number of shared OTUs between paired soil and stream samples in the Urumqi Glacier No.1 did not exhibit a clear pattern along GFC (Fig. d). Considering the measured environmental variables, the Mantel tests showed that the bacterial α-diversity of the Hailuogou Glacier soil significantly correlated with SRP, and that of the Urumqi Glacier No.1 stream significantly correlated with pH, TN, and NO 3 − (Mantel's p < 0.05, Fig. ). However, bacterial α-diversity of the Hailuogou Glacier stream and the Urumqi Glacier No.1 soil did not show significant relationships with those environmental variables (Mantel's p > 0.05, Fig. ). Bacterial communities exhibited different dominant phyla between forefield soils and glacier-fed streams, as well as between the two glaciers (Fig. ). For the Hailuogou Glacier, bacterial communities were dominated by γ-Proteobacteria (average relative abundance of 27.6%), α-Proteobacteria (24.8%), Acidobacteria (13.7%), Bacteroidetes (11.8%), and Actinobacteria (9.1%) in the forefield soils, and by γ-Proteobacteria (32%), α-Proteobacteria (23.1%), Bacteroidetes (20.9%), and Cyanobacteria (13.8%) in the glacier-fed stream. For the Urumqi Glacier No.1, bacterial communities were dominated by Acidobacteria (20.6%), α-Proteobacteria (15.6%), γ-Proteobacteria (14.1%), Cyanobacteria (13.8%), Bacteroidetes (13.5%), Actinobacteria (6.1%), and Gemmatimonadetes (5.1%) in the forefield soil, and by γ-Proteobacteria (31.6%), Bacteroidetes (21.8%), α-Proteobacteria (17.2%), Cyanobacteria (13.9%), and Deinococcus-Thermus (6.3%) in the glacier-fed stream. Along GFC, the relative abundance of these dominant phyla rarely showed significant patterns (Fig. ). Among the exceptions, Actinobacteria decreased while Bacteroidetes increased along GFC in the glacier-fed stream of the Hailuogou Glacier. Acidobacteria increased while Bacteroidetes decreased along GFC in the forefield soil of the Urumqi Glacier No.1. Gemmatimonadetes and α-Proteobacteria increased, while γ-Proteobacteria decreased, along GFC in the glacier-fed stream of the Urumqi Glacier No.1. However, at different stages of the succession (along the gradient of increasing distance to the glacier), bacterial communities were dominated by different OTUs in forefield soils and glacier-fed streams, as well as in the two glaciers (Hailuogou Glacier vs. Urumqi Glacier No.1) (Fig. ). Our findings revealed significant variation in microbial diversity and composition between soils and streams, as well as between the two glacier sites. The high number of shared OTUs between soil and stream samples suggests potential microbial transfer and shared colonization sources, supporting part of the "Dual-Domain" concept.
However, observed differences in community structure highlight how environmental factors, such as nutrient availability and pH, drive divergence between these domains.

Beta-Diversity

Non-metric multidimensional scaling (NMDS) analyses revealed significant differences in bacterial communities across the different ecosystems, a finding further confirmed by the ADONIS results (Fig. a). Bacterial communities exhibited significantly higher β-diversity in the glacier-fed stream than in the forefield soils for the Hailuogou Glacier (Fig. b), suggesting greater taxonomic heterogeneity in stream bacterial communities for the Hailuogou Glacier. However, no significant differences in β-diversity were found between soil and stream bacterial communities for the Urumqi Glacier No.1 (Fig. b). In addition, β-diversity between paired soil and stream samples increased with the distance from the glacier for the Hailuogou Glacier, suggesting community divergence (Fig. d). Considering the variation of the environmental variables, the Mantel tests showed that bacterial β-diversity of the Urumqi Glacier No.1 soil correlated with most of the measured environmental variables, whereas that of the Hailuogou Glacier soil correlated only with SRP and the C:N ratio (Mantel's p < 0.05, Fig. ). In addition, bacterial β-diversity of the glacier-fed streams of both glaciers did not show significant relationships with any of those environmental variables (Mantel's p > 0.05, Fig. ). According to the β-diversity partitioning, the variations in bacterial communities in the forefield soils and glacier-fed streams of the Hailuogou Glacier and the Urumqi Glacier No.1 were composed differently of turnover (β turn) and nestedness (β nest). For the Hailuogou Glacier, bacterial communities exhibited higher β turn but lower β nest in the forefield soils than in the glacier-fed stream (Fig. c). For the Urumqi Glacier No.1, however, there were no significant differences between soil and stream bacterial communities in terms of β turn and β nest (Fig. c). Moreover, when comparing β turn and β nest, β turn was higher than β nest in soil bacterial communities, while β nest was higher than β turn in stream bacterial communities for the Hailuogou Glacier (Fig. c). However, for the Urumqi Glacier No.1, β turn was higher than β nest in both soil and stream bacterial communities (Fig. c).

Co-occurrence Network

Bacterial communities in the forefield soils and glacier-fed streams of the Hailuogou Glacier and the Urumqi Glacier No.1 formed distinct co-occurrence networks (Figs. and ). According to the network-level topological features (Table ), the soil and stream bacterial networks of the Hailuogou Glacier were more complex than those of the Urumqi Glacier No.1. Moreover, the bacterial network of the stream biofilm was more complex than the soil network for the Hailuogou Glacier, while the opposite was true for the Urumqi Glacier No.1. These networks also had modular structures, with most major modules (those comprising more than 5% of the nodes) in each network composed of OTUs highly enriched at a certain site (Figs. and ). For example, in the soil bacterial network of the Hailuogou Glacier (Figs. a and a), modules M3, M2, M6, M7, and M1 were composed of OTUs enriched in different soil samples from SO1 to SO5 along the GFC, respectively. In the soil bacterial network of the Urumqi Glacier No.1 (Figs. c and c), modules M4, M1, M5, M2, M7, M6, and M3 were composed of OTUs enriched in different soil samples from SO1 to SO7 along the GFC, respectively.
Similar patterns were also found in the stream bacterial networks of the Hailuogou Glacier and the Urumqi Glacier No.1 (Figs. d and b). These results suggest that the succession of bacterial communities in forefield soils and glacier-fed streams was clearly reflected in the co-occurrence network modules.
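To make the network analysis concrete, the sketch below outlines the construction and module-detection steps described in the Analyses section. It is an illustrative R implementation under stated assumptions: otu_rare is a placeholder for the rarefied OTU table of one sample category, and Louvain clustering in igraph stands in for the modularity analysis that was visualized in Gephi.

# Co-occurrence network construction and module detection (illustrative sketch)
library(igraph)

# otu_rare: rarefied OTU count table (samples x OTUs) for one sample category
rel  <- sweep(otu_rare, 1, rowSums(otu_rare), "/")             # relative abundances
keep <- colMeans(rel) >= 0.001 & colMeans(otu_rare > 0) > 0.8  # >=0.1% mean abundance, >80% occurrence
x    <- rel[, keep]

# Pairwise Pearson correlations with FDR-adjusted p-values
n <- ncol(x)
r <- cor(x, method = "pearson")
p <- matrix(NA, n, n, dimnames = dimnames(r))
for (i in 1:(n - 1)) for (j in (i + 1):n) {
  p[i, j] <- p[j, i] <- cor.test(x[, i], x[, j])$p.value
}
p_adj <- matrix(p.adjust(p, method = "fdr"), n, n, dimnames = dimnames(r))

# Keep only strong, significant associations (|R| > 0.8 and adjusted P < 0.01)
adj <- (abs(r) > 0.8) & (p_adj < 0.01)
adj[is.na(adj)] <- FALSE
diag(adj) <- FALSE
g <- graph_from_adjacency_matrix(adj * 1, mode = "undirected", diag = FALSE)

# Network-level topology and module (community) structure
edge_density(g)
transitivity(g)
mean(degree(g))
modules <- cluster_louvain(g)
modularity(modules)

Node and edge lists exported from such a graph can then be imported into Gephi for visualization, as was done in this study.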
Glacier forefield soils and streams present a unique opportunity to study ecological succession in response to glacier retreat. Our study in glacier forefields, focusing on the new concept of "Dual-Domain Primary Succession," revealed significant insights into the synchronous yet distinct trajectories of soil and stream ecosystems.

Glacier Forefield Soils

In glacier forefield soils, the Hailuogou Glacier exhibited significantly higher bacterial α-diversity than the Urumqi Glacier No.1, with a decreasing pattern along the glacier forefield chronosequence (GFC) for the Hailuogou Glacier and a unimodal distribution for the Urumqi Glacier No.1. These differences suggest that bacterial communities follow distinct successional trajectories, influenced by local environmental conditions. While pioneer microbial communities in newly deglaciated soils originate from various sources, including supraglacial and subglacial habitats and atmospheric transport, the diversity of these communities can be shaped by factors such as the glacier's climate regime and catchment position. The Hailuogou Glacier, situated at a lower elevation and influenced by monsoon winds, receives microbial inputs enriched by diverse surrounding ecosystems. In contrast, the Urumqi Glacier No.1, located at a higher elevation and influenced by westerly winds crossing arid regions, experiences more stringent environmental selection, which may reduce microbial diversity. These results highlight potential differences in microbial diversity between the Hailuogou Glacier and the Urumqi Glacier No.1, although distinguishing the exact factors contributing to these patterns requires further research involving additional glacier sites and detailed seasonal sampling. Along GFC, contrasting successional patterns in soil bacterial communities were evident between the Hailuogou Glacier and the Urumqi Glacier No.1. In glacier forefields, the retreating chronosequence has been identified as a significant factor in shaping microbial diversity and community structure. However, previous studies have yielded conflicting results on whether soil microbial diversity increases, decreases, or remains unchanged along GFC, although increasing α-diversity is a commonly observed successional pattern owing to increasing potential niches, resource availability, and habitat heterogeneity.
The reason for these inconsistent patterns is that microbial community succession is strongly and collectively controlled by a wide variety of factors in glacier forefields, such as successional stage, deglaciation time, moisture, vegetation, and nutrients. Moreover, these factors contribute differentially to bacterial community succession at different stages. Along GFC, soil properties exhibited distinct patterns, with a decrease in pH and increases in nutrients, organic carbon, and moisture levels (Fig. ), accompanied by a general increase in α-diversity and shifting microbial community composition. As vegetation develops, plants appear to compete effectively against soil microbes for available nitrogen in the low-N environment of glacier forefields, limiting microbial growth and productivity. Decreasing diversity arising from interspecific competition also occurs during primary succession in other systems. Previous studies have identified three key processes driving the formation of biological communities after glacier retreat: habitat filtering, biotic interactions, and time-mediated processes. However, consensus on the relative importance of these processes remains elusive, potentially due to their varying significance across different life forms and geographic regions. In our study, the unimodal pattern for the Urumqi Glacier No.1 might suggest that the main driving force shifts from edaphic properties at the early stage to vegetation properties at later stages, with the middle stage experiencing less environmental stress and competition. Conversely, for the Hailuogou Glacier, plant succession proceeds rapidly owing to the warm and wet climate, and the decreasing bacterial α-diversity might reflect an increasing influence of vegetation, as inferred from the ecological interactions discussed above. Microbial community succession in glacier forefields progresses along GFC, not only varying in α-diversity but also shifting in community composition. In our study, high species turnover was found in glacier forefield soils for both glaciers, consistent with previous findings, indicating that bacterial community changes were largely driven by species turnover. Moreover, when comparing the two glaciers, bacterial communities in the glacier forefield soils of the Urumqi Glacier No.1 exhibited significantly higher turnover than those of the Hailuogou Glacier. This high turnover might be driven by substantial environmental changes in glacier forefield soils during succession, particularly at the early stages. An alternative explanation is that the initial bacterial populations might not be adapted to the developing environments during succession, allowing immigrant populations to occupy the newly created ecological niches. At the phylum level, Proteobacteria, Bacteroidetes, Acidobacteria, and Actinobacteria were the predominant (average relative abundance > 5%) phyla in forefield soils for both glaciers, while Cyanobacteria and Gemmatimonadetes were also predominant for the Urumqi Glacier No.1. These phyla are also reported as predominant in the forefield ecosystems of other glaciers, decreasing or increasing in relative abundance over successional time. In our study, the relative abundance of the dominant phyla rarely presented clear successional patterns (i.e., significant correlations with the distance to the glacier) in the glacier forefields (Fig. ). However, the successional pattern of bacterial community composition was evident at the OTU level and was particularly confirmed by the co-occurrence networks. In natural ecosystems, microorganisms frequently coexist within complex networks characterized by intense interactions, playing essential roles in community assembly. These interactions suggest underlying biological and/or biochemical relationships among microbes. Modularity, in particular, highlights the network's tendency to form sub-clusters of nodes (here, OTUs), reducing high-dimensional communities into ecological modules (groups of densely connected nodes). Modules can reveal ecological properties such as functional interactions, co-occurrence patterns, and shared responses to environmental conditions that are often overlooked when communities are studied solely through taxonomic groupings. In our study, the bacterial community networks showed a clear module structure in the glacier forefield soils of both glaciers. Each module consisted of OTUs enriched at a certain successional stage, revealing a succession pattern in which bacterial communities were dominated by different taxa along GFC.

Glacier-Fed Streams

In the framework of "Dual-Domain Primary Succession," glacier-fed streams present a fascinating parallel to adjacent glacier forefield soils, exhibiting similar yet distinct successional trajectories of bacterial communities. Different successional trajectories of bacterial communities were also found between the glacier-fed streams of the two studied glaciers. Our study highlights both the synchronicity and the differentiation in the successional pathways of microbial communities in these interconnected ecosystems. Glacier-fed streams begin succession in synchrony with their adjacent glacier forefield soils after glacier retreat. Similar to microbial communities in forefield soils, microorganisms in glacier-fed streams also come from various sources, some of which are shared with forefield soils, especially at the early successional stage. The pioneering microbes in glacier-fed streams originate from glaciers and are transported by meltwater, which contains diverse microbes from supraglacial, englacial, and subglacial habitats. Moreover, this microbial influx can be supplemented by contributions from groundwater and adjacent terrestrial environments. Some of these microorganisms can adhere to benthic substrates, forming biofilms that serve as diversity hotspots in glacier-fed streams. As discussed above, maritime glaciers may harbor a higher diversity of microbes in the glacier per se than continental glaciers, initiating a richer microbial assembly in the associated glacier-fed streams. In this study, the glacier-fed stream of the Hailuogou Glacier exhibited higher bacterial α-diversity in benthic biofilms than that of the Urumqi Glacier No.1. In addition, bacterial diversity in the benthic biofilms of the glacier-fed streams presented distinct longitudinal patterns, decreasing for the Hailuogou Glacier while increasing for the Urumqi Glacier No.1. Along glacier-fed streams, with increasing distance to the glacier, one of the most notable changes is the shift in the relative contributions of different water sources to stream discharge, from predominantly glacial meltwater to increasing influences of groundwater and surface runoff. A concomitant change is the shift in the relative contributions of different microbial sources (groundwater and terrestrial) to glacier-fed streams. Accordingly, hydrological alterations dramatically restructure the stream habitat template, including channel stability and water physicochemical properties (pH, temperature, DOC, nutrients), acting as strong environmental filters for microbial communities in glacier-fed streams. For example, cold-adapted species are displaced by less cryophilic species along glacier-fed stream succession. In our study, the measured physicochemical variables exhibited negligible influences on bacterial α-diversity (Fig. ), highlighting the importance of hydrological or integrated environmental influences. Due to the intimate hydroecological connections between glaciers and glacier-fed streams, biodiversity in these streams is highly susceptible to glacier retreat. The increasing bacterial diversity in the glacier-fed stream of the Urumqi Glacier No.1 aligns with previous studies linking rising biodiversity to enhanced channel stability and warmer water temperatures. In contrast, the glacier-fed stream of the Hailuogou Glacier showed a decreasing diversity pattern (Fig. b), which may result from an initially high diversity contributed by multiple sources, alongside inferred effects of increasing stream flow, current velocity, and channel instability. The growth of benthic biofilms in the glacier-fed stream of the maritime glacier appeared to be controlled primarily by increasingly harsh hydrological conditions. In these glacier-fed streams, Proteobacteria, Bacteroidetes, and Cyanobacteria were the predominant phyla for both glaciers. Deinococcus-Thermus was also predominant for the Urumqi Glacier No.1. These phyla are also abundant in other glacier-fed streams. Some of the phyla showed clear successional patterns. For the Hailuogou Glacier, the relative abundance of Cyanobacteria decreased, while that of Bacteroidetes increased, along the glacier-fed stream. For the Urumqi Glacier No.1, α-Proteobacteria increased in relative abundance while γ-Proteobacteria decreased along the glacier-fed stream. Along GFC, groundwater-meltwater interactions and soil–water interactions affect glacier-fed streams, shaping microbial diversity and community structure and resulting in distinct microbial communities at different sampling points (successional stages) along the streams. In addition, the microbial variation pattern could reflect differences in nutrient requirements and environmental tolerances, whereby certain taxa thrive under specific nutrient conditions or flow dynamics while others are more sensitive to these environmental changes. Similar to glacier forefield soils, differences in bacterial community composition along glacier-fed streams were most evident at the OTU level. Bacterial communities were dominated by different OTUs at different successional stages. These successional properties were also confirmed by the module structure of the co-occurrence networks, as discussed above for glacier forefield soils.

Differences and Relationships Between Soil and Stream in Glacier Forefields

The concept of "Dual-Domain Primary Succession" emerges prominently in our study, highlighting the interconnected yet distinct successional trajectories of terrestrial and stream ecosystems in glacier forefields. Soil and stream microbial communities in glacier forefields may share some initial taxa due to shared glacial sources. However, the distinct hydrological influences (such as channelized subglacial flow feeding streams versus distributed hydrological inputs to soils), along with biological hotspots beneath the glacier, likely establish differences in initial community composition that are further shaped by each environment's specific conditions. In our study, both differences and relationships were evident between soil and stream ecosystems in glacier forefields. In glacier forefields, we observed that soils generally exhibited higher bacterial richness than adjacent glacier-fed streams. This difference can be attributed to the role of soils as a primary bacterial source for streams, a relationship supported by previous studies. In addition to the similar successional patterns presented in the glacier-fed streams and their adjacent forefield soils, a coupling relationship was evidenced by the high number of shared OTUs between them (Fig. d), arising from the same initial microbial sources as well as land–water transfer of microorganisms. Some of the bacteria seeding the stream communities reside in the surrounding terrestrial ecosystem and are delivered to the stream through surface runoff. This initial overlap in microbial communities underscores the concept of "Dual-Domain Primary Succession," where terrestrial and aquatic habitats, though distinct, commence their successional pathways from a common microbial foundation. Interestingly, our findings revealed a significant decrease in shared OTUs between soil and stream samples along GFC, particularly for the Hailuogou Glacier. This pattern suggests that the glacial and terrestrial signatures on stream bacterial communities diminish along the succession of the glacier forefield, especially for the Hailuogou Glacier. Additionally, we noted an increase in β-diversity between paired soil and stream samples along the GFC, which was especially marked for the Hailuogou Glacier. The decreasing shared OTUs and increasing β-diversity suggest that bacterial communities in these distinct ecosystems become more divergent during ecological succession, highlighting the core of the "Dual-Domain Primary Succession" concept. It demonstrates that while glacier forefield soils and streams share common microbial sources in the initial succession stages, their ecological pathways increasingly diverge, reflecting the distinct environmental influences and evolutionary pressures in each domain. Contrasting with studies focusing on the homogenization of biological communities in similar ecosystem types, our research revealed a trend towards differentiation and specialization within the unique contexts of soil and stream ecosystems in glacier forefields. This divergence is a hallmark of "Dual-Domain Primary Succession," illustrating the nuanced and complex nature of ecological succession in these environments.

Study Limitations

While our results provide initial support for the "Dual-Domain Primary Succession" concept of microbial communities in glacier forefield soils and streams, several limitations should be acknowledged. The study was limited to two glaciers (one maritime and one continental) sampled in a single campaign due to limited funding resources, restricting the generalizability of our conclusions. Examining additional glaciers across various climatic regions and glacier types would provide a more comprehensive assessment of the "Dual-Domain Primary Succession" concept.
In addition, the use of composite samples restricts detailed statistical analysis and may overlook fine-scale heterogeneity, although it provides an overview of microbial community composition. This approach was necessary due to time and logistical constraints in remote glacier environments. Future studies should include more individual replicates to enable more robust statistics and capture within-site variability. Additionally, our study measured a subset of environmental parameters, focusing on microbial diversity, selected soil properties, and stream characteristics. However, other factors such as vegetation coverage and more detailed nutrient profiling could further inform microbial dynamics in these habitats. Incorporating vegetation data could enhance our understanding of its influence on microbial community succession, as plant colonization may affect soil properties and microbial community composition through increased organic inputs and root-associated microbial interactions. Moreover, the absence of direct dating methods for soils in this study limits our ability to accurately determine the age of soil substrates along the GFC. Implementing precise dating techniques, such as radiocarbon dating, would allow a more accurate assessment of substrate age and successional stage, enhancing the understanding of microbial succession over time. For stream biofilms, incorporating hydrograph data would also benefit our understanding of microbial succession in glacier-fed streams. While our study introduces the "Dual-Domain Primary Succession" concept, it should be seen as a starting point for further research rather than a comprehensive validation. These limitations suggest directions for our future research to build on the current findings and provide a more comprehensive view of dual-domain ecological succession in glacier forefields.
The study was limited to only two glaciers (one maritime and one continental) in a single campaign due to limited funding resources, restricting the generalizability of our conclusions. Examining additional glaciers across various climatic regions and glacier types would provide a more comprehensive assessment of the “Dual-Domain Primary Succession” concept. In addition, the use of composite samples restricts detailed statistical analysis and may overlook fine-scale heterogeneity, although it provides an overview of microbial community composition. This approach was necessary due to time and logistical constraints in remote glacier environments. Future studies should include more individual replicates to enable more robust statistics and capture within-site variability. Additionally, our study measured a subset of environmental parameters, focusing on microbial diversity, selected soil properties, and stream characteristics. However, other factors such as vegetation coverage and more detailed nutrient profiling could further inform microbial dynamics in these habitats. Incorporating vegetation data could enhance our understanding of its influence on microbial community succession, as plant colonization may affect soil properties and microbial community composition through increased organic inputs and root-associated microbial interactions. Moreover, the absence of direct dating methods for soils in this study limits our ability to accurately determine the age of soil substrates along the GFC. Implementing precise dating techniques, such as radiocarbon dating, would allow for a more accurate assessment of substrate age and successional stage, enhancing the understanding of microbial succession over time. For stream biofilms, incorporating hydrograph data would also benefit our understanding of microbial succession in glacier-fed streams. While our study introduces the “Dual-Domain Primary Succession” concept, it should be seen as a starting point for further research rather than a comprehensive validation. These limitations suggest directions for our future research to build on the current findings and provide a more comprehensive view of “dual-domain” ecological succession in glacier forefields. In summary, this study investigated “Dual-Domain Primary Succession” in the glacier forefields of the Urumqi Glacier No.1 and Hailuogou Glacier, examining simultaneous yet distinct ecological successions in soil and stream ecosystems following glacier retreat. It unveiled complex ecological dynamics influenced by environmental and biological factors, with unique successional patterns in these interconnected domains. Soil ecosystems showed varying bacterial successional patterns between the Hailuogou Glacier and Urumqi Glacier No.1. The Hailuogou Glacier had a higher but declining bacterial α-diversity, while the Urumqi Glacier No.1 displayed a unimodal diversity pattern. These differences highlight the impact of glacier-specific factors such as climate and position on microbial diversity, along with high species turnover and distinct microbial structures at various taxonomic levels. Similarly, glacier-fed streams presented unique successional trajectories. Streams from the Hailuogou Glacier initially had higher bacterial diversity, which decreased over time due to hydrological changes, whereas the glacier-fed stream of the Urumqi Glacier No.1 showed increasing bacterial diversity, influenced by water source alterations and stream stability.
The study emphasized the interconnected yet divergent evolutionary paths of terrestrial and stream ecosystems, initially sharing microbes but later following different developmental routes due to their ecological processes and habitat characteristics. This divergence was evident in the higher bacterial richness in soils, decreasing shared OTUs, and increasing β-diversity along glacier forelands. The “Dual-Domain Primary Succession” concept offers valuable insights into the distinct pathways of ecological succession in terrestrial and aquatic environments in glacier forefields, underscoring the importance of considering both domains in studies of glacier retreat. Electronic supplementary material: Supplementary file 1 (DOCX 2032 KB) |
Management of a malignant solitary fibrous tumor of lung by uniportal video-assisted pneumonectomy: a case report | 2b1e5daf-ce9b-4540-b961-78de8279c0b4 | 11844062 | Surgical Procedures, Operative[mh] | Solitary fibrous tumor (SFT) was first described by Klemperer and Rabin in 1931 . It is a rare condition that originates from dendritic stromal cells. Most patients have no obvious symptoms. However, some individuals with a large tumor may present with diverse symptoms, such as thoracalgia, dyspnea, and cough. Patients with SFT of the lung are treated using surgical options, including lung wedge resection and lobectomy, but rarely pneumonectomy. In the present case report, a 35-year-old patient with a low-grade malignant SFT was assessed using three-dimensional computed tomography (3D-CT) reconstruction before surgery and underwent complete surgical resection via uniportal video-assisted pneumonectomy. A 35-year-old male patient complained of mild thoracalgia and dyspnea lasting for more than a month that did not respond to oral medication. There were no other obvious findings during a physical examination. The patient had no relevant previous medical or family history. Enhanced chest CT revealed a large 6.7 × 4.8 cm lesion in the left lung that was closely related to the left pulmonary arteries (Fig. A, B). A 3D-CT reconstruction carried out with Mimics Medical 21.0 indicated obvious compression and invasion of the surrounding blood vessels and a mediastinal and tracheal shift (Fig. C, D). Bronchoscopy showed that the tracheobronchial airway was compressed by an extratracheal lesion. Uniportal video-assisted thoracoscopic surgery was carried out through a 3.5-cm incision in the fifth intercostal space at the anterior axillary line after administering general anesthesia with a right double-lumen tube. A tumor was discovered on the left interlobar fissure. It invaded the great vessels in the hilar region of the lung and grew across the interlobar fissure. Since an intraoperative frozen-section biopsy indicated malignancy, left pneumonectomy was selected as the appropriate treatment procedure. Then, the left pulmonary arterial trunk (LPAT) was dissected, exposed, and controlled proximally using a vascular tourniquet. The left superior pulmonary vein and inferior pulmonary vein were also exposed and similarly controlled proximally using a vascular tourniquet (Fig. A–C). Although the interlobar pulmonary artery was inadvertently injured during the surgery, the LPAT was already controlled proximally to avoid uncontrolled arterial bleeding. The left pulmonary arteries and veins were ligated and the left principal bronchus was divided using staplers. The tumor was then removed (Fig. A) and a pathological diagnosis of malignant SFT was confirmed (Fig. B). Tumor cells were spindle-shaped and arranged in whorls or demonstrated a hemangiopericytoma-like conformation. Atypia and mitotic figures were found. Immunohistochemistry showed positive CD34 and STAT6 expression. Mediastinal lymph node dissection was also performed. SFTs typically arise from mesenchymal cells located beneath the mesothelial lining of the pleura . Accordingly, the majority of SFTs grow slowly. Malignant SFTs account for approximately 80% of all SFT cases and the five-year survival rate is 81% . Most patients with benign SFTs are asymptomatic.
However, malignant SFTs are usually more aggressive than benign tumors and may cause chest tightness, pain, dyspnea, and respiratory insufficiency when compressing the adjacent trachea and lung tissue . A malignant SFT showing invasion and severe peritumoral adhesion or originating from the visceral pleural fold at the interlobar fissure may resemble a malignant pulmonary mass rather than a pleural tumor . Because the tumor was located in the hilus of the left pulmonary in the present case, a CT-guided puncture before surgery was dangerous and unnecessary. It is difficult to distinguish between a malignant SFT and lung cancer before the surgery. Therefore, performing frozen section biopsy during the operation is critical. Three aspects of the treatment described in the present case were noteworthy. First, 3D-CT technology helped to illustrate the relationship between the tumor and its adjacent organs and important blood vessels. Because 3D-CT reconstruction revealed that the great vessels in the hilus region of the lung were infiltrated by a tumor, a lobectomy or pneumonectomy had to be chosen for the treatment. Precise 3D-CT reconstructions can analyze the risks before the surgery and predict an appropriate operative strategy for the surgeons. Second, because the tumor invaded left pulmonary arteries and veins and preoperative evaluation did not exclude the possibility of intraoperative hemorrhage, controlling the left pulmonary trunk allowed the distally involved pulmonary parenchyma to be safely resected during the surgery. Therefore, it is necessary to control the main pulmonary arterial trunk during such an operation. Third, surgical resection is an acceptable treatment for SFT. In the present case, the SFT invaded the hilus of the left pulmonary blood vessels and interlobar fissure and the left pneumonectomy was chosen as the treatment. Recurrence and metastasis via hematogenous and lymphogenous routes are both typical features of malignant SFTs . Mass excision with a tumor-negative margin is typically suggested due to the SFT’s malignant and recurring capacity. Larger and more aggressive tumors are associated with malignancy, making tumor size indicative of malignancy potential . In addition, if the tumor invades the lung parenchyma, chest wall, pericardium, and diaphragm, resection of a part of the chest wall, pericardium, and diaphragm, lobectomy, and even pneumonectomy are recommended . Thus, the choice of surgery area is affected by SFT size and location as well as the state of tumor invasion. In general, an SFT is a rare condition. The 3D-CT reconstruction may help to identify an appropriate operative strategy for surgeons. It is necessary to control the main pulmonary arterial trunk to avoid hemorrhage when preoperative evaluation does not exclude the possibility of intraoperative hemorrhage. The choice of surgery area is affected by SFT size and location. |
The Model of an Ischemic Non-Healing Wound: Regeneration after Transplantation of a Living Skin | e33aef8d-2d0c-47d4-815e-965b3cf064a2 | 11832069 | Surgical Procedures, Operative[mh] | Normally, regeneration of the cutaneous damage is completed by a full restoration of the skin structure and functions; however, when infection, hypoxia, or immune dysfunction are added, the wound may acquire the status of a chronic non-healing lesion . These wounds are characterized by excessive inflammation, increased level of proteolytic activity, and delayed matrix deposition . The regeneration stages of a non-healing wound are the same as for the normally healing wound, although with a significant delay in the inflammatory phase . Diabetes, vascular insufficiency, exhaustion, elderly age, local infection, and compression-induced necrosis are referred to the factors of wound chronicity . Depending on the disease genesis, the following types of non-healing wound are distinguished: bed sores, diabetic ulcers, and ischemic non-healing wounds. The latter are resistant to conservative therapy. One of the promising directions in managing nonhealing wounds is transplantation of biomedical cell products (BCP) of the skin equivalent type. This treatment contributes to effective remodeling of the granulation tissue and removal of a cosmetic defect, one of the wound chronicity sequalae . The development of a model of ischemic non-healing wound adequate to human pathology is an integral part of researches directed towards the creation of BCP. Differences in the structure between the human skin and that of laboratory animals cause difficulties for the creation of similar models of ischemic non-healing wounds . Researchers often use synthetic constructions and materials to make the process of regeneration in the animal model similar to the human pathology, reducing thereby the adequacy of the models. For example, presence of sutures in the immediate proximity to the wound causes a significant background during the work, therefore, the results are difficult to interpret. All this is one of the reasons of a great variety of models of ischemic non-healing wounds on the laboratory animals . Presently, there are several patented models for the non-healing wounds in Russia, but there is no model, which would be perfectly suitable for conducting preclinical BCP studies. The aim is to evaluate the possibility of using the ischemic non-healing wound model, developed in our laboratory, for preclinical studies of biomedical cell products during transplantation of a tissue-engineered construct. The tasks of the study are: to conduct the experiment on transplantation of tissue-engineered construct “living skin equivalent” (LSE) representing an epidermal-mesenchymal layer on a carrier, and to select methods for determining the effectiveness of treating ischemic non-healing wounds during preclinical studies on the proposed model. Cell cultures The cell cultures of keratinocytes and mesenchymal stem cells (MSCs) obtained from Cell Culture Collection of Koltzov Institute of Developmental Biology of Russian Academy of Sciences (Moscow, Russia) were used for LSE preparation. Keratinocytes were cultivated on the DMEM/F-12 (PanEco, Russia) in the 1:1 ratio with fetal bovine serum (HyClone, USA), 5 pg/ml insulin (Sigma-Aldrich, USA), 10-6M isoproterenol (Sigma-Aldrich), 5 pg/ml transferrin (Sigma-Aldrich), 10 ng/ml EGF (Sigma-Aldrich), and 1% penicillin-streptomycin (Gibco, USA). 
MSCs were cultivated in DMEM (PanEco) containing 10% fetal bovine serum (HyClone) and 1% penicillin-streptomycin (Gibco). To produce the living skin equivalent (LSE), mouse keratinocytes and MSCs were cultivated on a collagen-hyaluronic film. Cells on the scaffold were ready to be transferred to the wound after 2-3 days. The work with animals All manipulations with animals were done under general anesthesia in compliance with the Rules for the Work using Experimental Animals (Russia, 2010) and the International Guiding Principles for Biomedical Research Involving Animals (CIOMS and ICLAS, 2012); the ethical principles of the European Convention for the Protection of Vertebrate Animals used for Experimental and Other Scientific Purposes (Strasbourg, 2006) were strictly followed. The study was approved by the Bioethical Committee of Koltzov Institute of Developmental Biology of Russian Academy of Sciences (protocols No.23 of November 15, 2018 and No.28 of September 5, 2019). Animals were housed with free access to food and water. The study was performed on 56 BALB/c mice. The mice were divided into the following groups: “control” (n=19), “scaffold” (n=19), and “LSE” (n=18). Before manipulations, the animals underwent general anesthesia with Avertin. Fur was removed from the operation field with a hair removal cream (Veet, Canada). After depilation, a 30 × 10 mm rectangle was marked on the mouse skin, in the center of which a full-thickness circular opening 5-7 mm in diameter was excised. Next, full-thickness parallel skin incisions were made along the marked lines and all large vessels were cut off the flap. Bleeding was arrested by applying hydrogen peroxide to the ligated vessels. The flap margins were sutured, the wound was washed, and Tegaderm™ plaster (Germany) was applied. Transplantation of the scaffold and LSE in the appropriate groups was performed by applying these materials to the wound bed of the animals. After the operation, the wound was covered with a Tegaderm™ plaster. On days 3-5, the wounds of mice in the control, scaffold, and LSE groups were washed with a sterile 0.1% solution of gentamycin in DPBS (PanEco). The animals were withdrawn from the experiment on days 5, 7, 14, and 21 by an overdose of anesthesia. Biomaterial preparation The biomaterial for histological investigations was fixed in 10% formaldehyde (Biovitrum, Sweden). The biomaterial for immunohistochemical investigations was placed into OCT Cryomount gel (HistoLab, Sweden) and frozen in liquid nitrogen. Histological investigation The material embedded into paraffin blocks was used for histological investigations. Histological sections were obtained using the Microm HM 430 microtome (Thermo Fisher Scientific, USA). Preparations were stained with hematoxylin-eosin and Mallory’s trichrome stain. Immunohistochemical staining The cryosections were prepared using the standard technique on the CM1900 cryostat (Leica Microsystems, Germany). Fixation was done with 4% paraformaldehyde. The blocking solution, in which antibodies were diluted, contained 5% serum from the species in which the secondary antibodies were raised. Preparations were incubated with primary antibodies overnight. The secondary antibodies and DAPI solution were then applied to the preparations, which were mounted in BrightMount/Plus medium (Abcam, UK).
The following antibodies were used in the work: rabbit primary monoclonal antibodies against Krt14 (1:200; Ab197893; Abcam) and rat primary monoclonal antibodies against CD31 (1:100; Ab56299; Abcam), and also goat anti-rat secondary antibodies AlexaFluor 488 (1:600; Ab150157; Abcam) and donkey anti-rat secondary antibodies AlexaFluor 488 (1:500; A-21206; Invitrogen, USA). Preparations were viewed and photographed using microscopes BZ-9000E (Keyence, Japan) and IX73 (Olympus, Japan). Raster scanning optoacoustic mesoscopy The dynamics of the wound healing process was studied using the RSOM Explorer P50 mesoscope (iThera Medical, Germany). Morphometry and statistical analysis The morphometric analysis was performed by means of ImageJ program. Data were analyzed in the Excel program using R programming language and RStudio environment. The Kruskal-Wallis nonparametric test for multiple comparisons was applied for comparative data analysis. Comparisons between the groups were performed by means of Dunn’s test. Differences were considered statistically significant at p<0.05.
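The authors ran this analysis in Excel and R/RStudio; purely as an illustration of the Kruskal-Wallis-plus-Dunn workflow named above, an equivalent sketch in Python is shown below (the measurements are invented placeholders, and the scikit-posthocs package is assumed to be available for Dunn's test).

    import pandas as pd
    from scipy.stats import kruskal
    import scikit_posthocs as sp

    # Invented values of some morphometric parameter for the three experimental groups
    data = pd.DataFrame({
        "value": [0.41, 0.38, 0.45, 0.52, 0.61, 0.58, 0.83, 0.90, 0.78],
        "group": ["control"] * 3 + ["scaffold"] * 3 + ["LSE"] * 3,
    })

    # Kruskal-Wallis test across the three groups
    samples = [g["value"].to_numpy() for _, g in data.groupby("group")]
    h_stat, p_value = kruskal(*samples)
    print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")

    # Dunn's post-hoc test for pairwise group comparisons (Holm-adjusted p-values)
    print(sp.posthoc_dunn(data, val_col="value", group_col="group", p_adjust="holm"))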
During preliminary experiments, the model of a nonhealing wound, which reproduced ischemic conditions and was similar to human pathology, has been developed in our laboratory . However, it had some shortages, which hindered technically investigations connected with preclinical testing of BCP. For example, during flap formation, excessive bleeding to the wound bed was observed; besides, on days 7-14 the wounds suppurated. Moreover, the developed model did not have a covering, which would protect BCP in the wound bed. Therefore, some procedures were performed to optimize the model: bleeding from the ligated vessels was arrested with hydrogen peroxide, the wound was washed with a sterile 1% gentamycin solution on DPBS during the operation and postoperative care, and the Tegaderm™ plaster was applied to the wound (see the ). In order to assess the suitability of the proposed model, an experiment for BCP testing was carried out. It included the transplantation of the tissue-engineered construct of LSE or the transplantation of the scaffold kept in the conditions similar to LSE.
No transplantation was done to the mice of the control group. The Tegaderm™ plaster was used to cover the wounds of mice in all groups. The histological, immunohistochemical, and raster scanning optoacoustic mesoscopy (RSOM) methods were chosen to evaluate the effectiveness of treating ischemic non-healing wounds during preclinical studies on the proposed model and to compare the wound conditions. Characteristic of the wound process by histological and immunohistochemical methods The histological analysis has shown that the inflammatory phase in the above model lasts up to 5 days from getting the injury. At this stage, the wound bed of all mice was characterized by infiltration with inflammatory cells. On day 7-14 of wound healing, a proliferative phase was observed. During this phase, gradual formation of the granulation tissue was noted in the wound bed. On day 21, there came the phase of re-epithelialization and remodeling. The majority of animal wounds in all groups were characterized by mature scars and formed epithelium. The histological analysis allowed us to characterize in detail and compare the condition of all animal wounds according to the stages of wound healing. The following parameters were proposed as the criteria for assessing the effectivity of wound treatment using BCP: vascularization of the wound bed, infiltration of the wound bed with inflammatory cells, the state of the tissue-remodeling zone in the wound bed, as well as the number of hair follicles (HF) at the wound edges . Inflammation phase As it has been mentioned above, the lag in the wound process at the phase of inflammation and excessive inflammatory response underlie the acquisition of the non-healing status by the wound . Excessive infiltration of the wound bed with neutrophils is a key factor in the development of chronic inflammation and may be considered as a histological biomarker of chronic non-healing wounds . On day 5, the wound bed of many mice from the control group was filled with inflammatory infiltrate. Migration of the inflammatory cells to the adipose tissue and fascia under the wound bed and at the wound margins was noted. Besides, significant degradation of epidermis and derma at the wound margins and areas of the dead cells have been found . The wounds of many mice from the scaffold group had a similar histological picture . At the same time, moderate infiltration of the wound bed, adipose tissue, and fascia of the entire wound area with inflammatory cells was observed in the majority of mice from the LSE group . The comparison of the examined groups has demonstrated the tendency to the reduction in the area of the inflammatory infiltrate of the wound bed in the mice of the LSE group . Hence, we may suggest an immunomodulating effect of LSE owing to the MSCs in its composition. Proliferation phase Granulation tissue In the period of proliferation, maturation of the granulation tissue occurred in the mice of all groups. However, a number of essential differences in the formation of the granulation tissue was noted between the mice from the control and LSE groups. It is known that non-healing wounds are characterized by impairment in granulation tissue formation due to various reasons. For example, the overproduction of reactive oxygen species by neutrophils causes damage to extracellular matrix . TNF-a, inducing collagenase activity, is also supposed to inhibit normal scarring . 
In the control group, imitating a non-healing wound without treatment, a great difference was noted on the histological preparations between the thickness of dermis at the margins and the thickness of the tissueremodeling zone at the wound center in the majority of animals. The wound margins rose significantly above the wound bed on day 7 of the experiment . In the scaffold group, a similar picture was observed . On day 7, the thickness of the tissueremodeling zone in the wound bed in the majority of animals of the LSE group almost reached the wound margins . On day 14, the thickness of the tissue-remodeling zone in many animals of the control group was still significantly less than that of the dermis at the wound margins ; in the scaffold group, the thickness of the tissue-remodeling zone partially or fully reached the wound margins ; in the LSE group, the tissue-remodeling zone filled completely the wound bed by width and height in the majority of animals . The morphological middle of the wound in the mice of the control group on day 14 was characterized by a scanty amount of the tissue, while in the groups “scaffold” and “LSE” it was plentiful; the granulation tissue in the LSE group had well visualized fibers. To describe the effectiveness of treatment with LSE transplantation in respect of the effect on granulation, the term “smoothing coefficient” was introduced, which determines the ratio of the thickness of the tissue-remodeling zone to the thickness of the dermis of the wound margins. In the LSE group, the smoothing coefficient was statistically significantly higher than that in the groups “scaffold” and “control” on day 7, which spoke of the efficacy of LSE transplantation for removing the defect of the granulation tissue, which may result in cosmetic problems . On day 14, the smoothing coefficients in the groups “LSE” and “scaffold” were statistically significantly higher than in the control group, and at the same time did not differ statistically significantly from each other . Tissue condition at the wound margins. Hair follicles: death and regeneration On day 7, degenerative changes in the wound margin tissue continued in many animals of all groups. Due to the individual differences between the rates of regeneration, diverse variants of margin conditions were observed at a given point in time: moderate degradation interfollicular epidermis, dermis, and HF at the wound margins; mass death of these structures with subsequent formation of a scab; partial tissue regeneration. HF regeneration is known to occur in various ongoing processes: regeneration in microinjury, regeneration in case of the partial HF loss, wound-induced hair neogenesis typical for large full-thickness wounds in rodents , and also wound- induced anagen . In our work, it was not possible to identify the type of regeneration during which the HF restoration occurred in the process of ischemic chronic wound healing due to technical difficulties. In this connection, we use the term “HF regeneration” not categorizing its type. In our experiment, certain differences in the dynamics of degenerative processes in HF, their death and regeneration were noted in the animals of control and LSE groups. Thus, in many mice from the control group, there was noticed a mass HF death at the wound margins. The HF cells were characterized by marked degenerative changes and kariolysis . At the same time, in the LSE group, many HF located at the wound margins had either normal morphology or moderate signs of degeneration . 
An average amount of HF per a wound margin increased in the LSE group relative to the control group. It may speak of the fact that LSE contributed to the preservation of HF or accelerates the process of their regeneration . On day 14, HF of the majority of wounds had normal morphology; the number of HF in the groups did not differ statistically significantly. Thus, the rate of HF regeneration in all three groups became equal . Intensity of angiogenesis The quantitative analysis has revealed absence of statistically significant differences between the density of vessels in the wound bed in the mice of the three groups on day 7. Consequently, LSE does not influence angiogenesis . Scar remodeling and reepithelialization On day 21, the majority of the wounds in the animals of all groups were characterized by scar formation and reepithelialization . In many mice, fibers prevailed over the cell component in the scar tissue; besides, HF in the phase of a mature anagen were also observed. Therefore, the conclusion may be drawn on the full completion of the wound regeneration process. However, in some mice, the cell component prevailed over fibers in the wound bed, infiltration with inflammatory cells was noted, meaning that the process of wound healing is not finished. Both variants of wound healing were present in the mice of all groups. Thus, the rate of regeneration in the groups on day 21 became equal. Characteristics of the wound process using raster scanning optoacoustic mesoscopy Raster scanning optoacoustic mesoscopy is the latest non-invasive technology, which allows one to get high-resolution images. RSOM is scanning the skin area of interest with a transducer in parallel with illumination with a bundle of optic fibers. Optoacoustic waves generated in the tissue in response to the pulsed illumination are fixed; the acquired image is a 3D propagation of the absorbed light in the tissue. The reconstructed images demonstrate the distribution of melanin and hemoglobin in the epidermis and dermis making it possible to obtain images of microvascular skin system at the depth of 1-2 mm . Since RSOM is successfully used in the investigations of skin pathologies such as psoriasis and atopic eczema , this method was used in the present study to evaluate the dynamics of wound blood flow as a criterion of successful BCP transplantation. Measurements were performed on days 3, 5, 7, 10, 14, and 21. As a result of the experiment, it has been established that on day 10 the intensity of the signal at low frequencies in the LSE group exceeded statistically significantly that in the control group . This fact may be interpreted in favor of the idea that LSE contributes to angiogenesis during wound healing. In the process of examining the wound bed by means of RSOM, a number of technical difficulties have been experienced. Thus, the material, from which the scaffold for LSE cells was constructed, hindered the experiment and created a significant background. Besides, the transplant interfered with the observation of the underlying microvasculature condition, whereas the wound of mice without transplantation remained open, therefore, the comparison of the results obtained appeared to be incorrect. Not only the transplant but the plaster as well, removal of which created the risk of wound infection, prevented measurements. 
Besides, during measurements, the animal experiencing the load caused by blood loss had to remain under general anesthesia for a long time, which negatively influenced its state and often resulted in death. All this suggests the conclusion that RSOM cannot be recommended for the evaluation of the effectiveness of BCP transplantation during preclinical studies. Exploration of the regenerative processes has shown that the proposed model of the ischemic non-healing wound is suitable for preclinical studies of biomedical cell products. The evaluation of the parameters such as wound infiltration with inflammatory cells, the condition of the tissue-remodeling zone, a number of hair follicles, vascularization of the wound bed according to the appropriate stages of wound healing using histological and immunohistochemical methods is quite adequate and suitable for preclinical studies of biomedical cell products on the examined model. Additionally, a new indicator, a smoothing coefficient, expressed as a ratio of the thickness of the tissue-remodeling zone to the thickness of the wound margin dermis, allowed to evaluate the degree of occupation of the wound bed with the developing tissue. Its high value in the LSE group means that transplantation of the biomedical cell products influences the properties of fibroblasts, hampers mechanical strain in the wound, and, consequently, prevents formation of a cosmetic defect. This indicator will make it possible to assess the condition of the tissue-remodeling zone in the wound bed with the transplantation and without it, and to predict thereby the effectiveness of biomedical cell products to eliminate cosmetic defects. The evaluation of the wound blood flow by raster scanning optoacoustic mesoscopy cannot be recommended for preclinical studies of biomedical cell products on the proposed model due to its specific features.
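Written out as a formula (with t denoting thickness), the smoothing coefficient referred to in this conclusion is simply:

\[
\mathrm{SC} \;=\; \frac{t_{\text{tissue-remodeling zone}}}{t_{\text{wound-margin dermis}}}
\]

so SC approaches 1 when the newly formed tissue has filled the wound bed level with the surrounding dermis, and stays well below 1 while the wound centre remains depressed.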
Application of data-driven blended online-offline teaching in medicinal chemistry for pharmacy students: a randomized comparison | fb168c18-84cc-4d93-9385-ba64f25b9fa8 | 11232250 | Pharmacology[mh] | With the comprehensive integration of information technology in the field of education, traditional classrooms have evolved with various new models of online teaching, making the instructional process more dynamic and effective . Learners are encouraged to engage in online learning tasks and digital game-based activities, experiencing the joy of dealing with digital challenges, acquiring knowledge, and enhancing learning outcomes . These emerging technologies serve as crucial tools for information dissemination in online education, profoundly impacting the reform of medical school education . Blended learning, an instructional model combining digital online learning with face-to-face classroom teaching, has gradually drawn more attention with advancements in internet technology and education . The concept of blended learning was first introduced in the U.S. National Education Technology Plan. Since 2004, the United States has been actively adopted and innovated the blended learning approach, continually exploring advancements in technology and other aspects. Higher education has undergone a significant evolution in teaching paradigms. Following eras of experiential imitation teaching and computer-assisted instruction , the current landscape is increasingly characterized by the data-driven instruction . This approach incorporates next-generation information technologies such as the Internet of Things, big data, cloud computing, and mobile internet, involving systematic collection and analysis of both online and offline learning data to inform instructional improvements and elevate learning outcomes . Data-driven instruction is an innovative approach that harnesses diverse forms of data to shape and enhance teaching practices . This encompasses a spectrum from summative data, such as test scores, to formative data gauging student understanding through activities like discussions. Diverging from summative assessments primarily designed for assigning grades, formative assessments aim to refine teaching methods. The collection and analysis of both types of data empower educators to discern patterns and address shortcomings within their classrooms. Through the strategic utilization of these insights, educators can tailor instruction to individual student needs, pinpoint specific areas for improvement, and implement timely interventions to bolster overall student success. This proactive and personalized approach to teaching ensures that educators are equipped with the necessary information to optimize learning experiences and foster positive educational outcomes for every student. Medicinal chemistry is a comprehensive discipline focused on the discovery and invention of new drugs, the synthesis of chemical pharmaceuticals, elucidating the chemical properties of drugs, and researching the interaction patterns between drug molecules and cellular entities. Its scope encompasses the chemical structure, physicochemical properties, preparation methods, transport metabolism, structure-activity relationships, chemical mechanisms of drug action, as well as approaches and methods for the discovery of new drugs . With the continuous deepening of educational reforms, the teaching approach in medicinal chemistry has shifted from traditional methods towards a blended learning model . 
This approach seamlessly integrates online and offline teaching, leveraging the advantages of interactive communication in face-to-face classrooms while overcoming the limitations of traditional offline teaching, such as a singular format and limited content. Moreover, the use of online resources in the blended learning model has expanded the platform for medicinal chemistry education, greatly enriching the teaching content. It has not only sparked students’ interest in learning but also broadened their perspectives. Therefore, it is essential to explore the application of the blended learning model in medicinal chemistry teaching. Teaching medicinal chemistry presents a unique challenge for pharmacy students, prompting a preliminary investigation into the data-driven blended online-offline teaching model’s implementation. This instructional approach amalgamates various teaching techniques with the objective of improving students’ learning outcomes and satisfaction, thereby offering additional teaching avenues for nurturing pharmaceutical talent. Participants This teaching reform experiment was open to all third-year Pharmacy students at Hebei North University. Before commencing the experiment, students were required to complete a short screening questionnaire to ensure they had the necessary resources for the experiment. The questionnaire asked the following five yes-or-no questions: (1) Do you have a stable internet connection? (2) Do you have access to an independent electronic device (laptop, tablet, or smartphone)? (3) Are you able to complete the online course? (4) Are you able to complete the exams and questionnaires? (5) Are you aware of this experiment and willing to participate? Students who answered “yes” to all questions were eligible for the study, while those who answered “no” to one or more questions were excluded. Sample size, grouping and blinding methods According to the sample size calculation method reported in the literature , the study required a minimum of 52 participants per group to achieve a significance level (α) of less than 0.05 and a power (1-β) of 80%. The participants were randomly divided into an experimental group (n = 59) and a control group (n = 59) using simple randomization. Both groups were supervised by the same teaching team, including one professor and two assistants. The experiment was conducted using a single-blind method, and the students were blinded after assignment to interventions. Study design We employed a randomized controlled trial to assess the effectiveness of a data-driven blended online-offline (DDBOO) teaching model in this cohort of pharmacy students. The DDBOO method was implemented in the experimental group, while the control group received traditional lecture-based learning (LBL). Interventions The DDBOO model for medicinal chemistry course The DDBOO instructional process is structured into three phases: pre-class, in-class, and post-class. Through a seamless integration of synchronous and asynchronous learning, we have formulated a comprehensive DDBOO teaching approach, as illustrated in Fig. . Before class The teacher introduces the theme, characteristics and tasks of the lesson online, emphasizing the importance of the chapter and sparking students’ interest. Students engage in self-directed online learning tasks utilizing the SuperStarLearn software. They access and complete tasks at their own pace, view microlecture videos covering key topics, and subsequently undergo corresponding chapter tests.
Following this, Problem-based learning (PBL) scenarios are introduced, encouraging collaborative teamwork to address PBL tasks. For those who do not complete assigned tasks, the learning alert system prompts them to do so. Teachers analyze online learning data, including the duration and frequency of student video views and chapter test accuracy, to identify common issues and pinpoint teaching challenges. In class During the class, teachers provide comprehensive explanations for commonly challenging issues and assess the learning outcomes through features such as quick response and in-class quizzes on the SuperStarLearn platform. Group discussions and collaborative thinking are encouraged to achieve a deeper understanding. Teachers also provide individualized guidance to address specific issues encountered by students during the learning process. By analyzing learning behaviors, such as participation in quick response and thematic discussions, as well as statistical data from in-class quizzes and assessments of group tasks, teachers can determine student engagement, personalized challenges, and learning effectiveness. This analysis enables teachers to intervene promptly, making adjustments to the teaching pace as necessary. Post class At the end of the class, the students completed a post-quiz and a questionnaire consisting of nine questions. Following the class, learning data retrieved from the SuperStarLearn Platform reports are used to distribute personalized assignments. By analyzing data such as assignment accuracy, teachers identified cognitive gaps and deviations among students. This information allows for targeted supplementation and correction in the subsequent class. LBL method for medicinal chemistry course In the control group, the same topics were presented through LBL. The lectures comprised two sessions, conducted once a week for 90 min each. During the class, the routine included the teacher explaining the learning objectives (5 min), delivering the content using PowerPoint slides (65 min), engaging in exercises (10 min), and participating in a class discussion or question-and-answer session (10 min). Students had the opportunity to participate in a question-and-answer session during the lecture, and discussions were encouraged if students wished to share their opinions or respond to their peers’ questions. Outcome measurements After obtaining informed consent, basic information about the participants, including age and gender, was collected. To evaluate students’ comprehension and application of knowledge, both groups underwent the same assessments, consisting of one pre-quiz and one post-quiz, each lasting 60 min and scored out of 100 points. Additionally, a questionnaire survey was administered at the end of the course to measure students’ self-perceived competence. The details of the questionnaire are presented in the Supplementary materials. This survey covered various aspects such as learning interest, targeted learning, motivation, self-learning skills, mastery of basic knowledge, teamwork abilities, problem-solving proficiency, and innovation capacity. 
Responses were rated using a 5-level Likert scale: 5 points for “strongly agreed,” 4 points for “agreed,” 3 points for “neutral,” 2 points for “disagreed,” and 1 point for “strongly disagreed.” Furthermore, a survey on satisfaction with the teaching mode was conducted, with responses categorized into four levels: “Very Satisfied,” “Satisfied,” “Neutral,” and “Dissatisfied.” In order to keep the responses impartial, both quizzes and questionnaires were completed anonymously, mitigating any potential influence, whether positive or negative, on the students.
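For illustration, the between-group comparisons described in the statistical analysis that follows (an independent-samples t-test for quiz scores and a chi-squared test for count data such as the satisfaction categories) can be sketched in a few lines of Python with SciPy. The study itself used IBM SPSS 20.0, and all numbers below are invented, not the study's data:

```python
# Illustrative only: invented scores and counts, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical post-quiz scores (0-100) for two groups of 59 students each.
rng = np.random.default_rng(0)
ddboo_scores = rng.normal(loc=82, scale=8, size=59)   # experimental (DDBOO) group
lbl_scores = rng.normal(loc=76, scale=9, size=59)     # control (LBL) group

# Independent-samples t-test for the difference in mean post-quiz scores.
t_stat, p_value = stats.ttest_ind(ddboo_scores, lbl_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Chi-squared test on a 2 x 4 table of satisfaction categories
# (Very Satisfied, Satisfied, Neutral, Dissatisfied); counts are hypothetical.
satisfaction = np.array([
    [30, 22, 5, 2],   # DDBOO group
    [18, 24, 12, 5],  # LBL group
])
chi2, p_chi, dof, expected = stats.chi2_contingency(satisfaction)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p_chi:.4f}")
```

As in the study, a p-value below 0.05 would be read as a statistically significant difference between the groups.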
A chi-squared test (symbolically represented as χ 2 ) was employed to assess the discrepancy of count data. To compare two independent groups, the student t-test was utilized. Data were expressed as individual values and as mean ± standard deviation (SD). Statistical analysis was conducted using IBM SPSS statistics 20.0 software. The significance level (alpha) was set to 0.05, and p-values less than 0.05 were considered statistically significant. Baseline characteristics of the students From September 2020 to January 2021, a total of 118 students actively participated in the teaching experiment. Among them, 46 students were male (38.98%), and 72 students were female (61.02%). The average age of the participants was 20.5 ± 0.7 years. These students were randomly assigned to two groups: the DDBOO group ( n = 59) and the traditional LBL group ( n = 59). Notably, all students successfully completed the entire teaching process, including quizzes and questionnaires, and there were no dropouts during the study period. A comprehensive analysis of demographic data between the DDBOO group and the LBL group is presented in Table . The results revealed no significant differences between the two groups in terms of gender ( P = 0.45), age ( P = 0.673), and pre-quiz scores related to basic knowledge ( P = 0.822). Comparison of the post-quiz test scores between two groups As illustrated in Fig. , the statistical analysis of the box plots depicting final exam scores reveals that the average scores of the DDBOO group are higher than those of the LBL group( T = 3.742, P < 0.001). Moreover, there is a reduction in the number of low-scoring students, suggesting a better mastery of professional knowledge among students in the DDBOO group. Comparison of self-perceived competence and satisfaction between the two groups A comprehensive evaluation of the teaching effectiveness between the DDBOO teaching method and the traditional LBL teaching approach was conducted through a post-teaching questionnaire survey. All questionnaires distributed for the survey were successfully collected and proved to be valid. According to Table , the DDBOO group outperformed the LBL group in various aspects, including learning interest, learning motivation, self-learning skill, mastery of basic knowledge, teamwork skills, problem-solving ability, and innovation ability, demonstrating statistically significant differences ( P < 0.05). While the score for learning targeted was higher in the DDBOO group compared to the LBL group, this difference was not statistically significant ( P > 0.05). Furthermore, as indicated in Table , the level of satisfaction within the DDBOO group surpassed that of the traditional LBL group ( P = 0.011). Utilizing SPSS software for data analysis, we conducted a correlation analysis to further examine the relationship between students’ online and offline learning behaviors and their final exam scores under the DDBOO teaching model. The correlation analysis results are depicted in Fig. . From Fig. , it is evident that the online test scores demonstrates a positively correlation with final exam scores ( r = 0.52), signifying a noteworthy impact of students’ performance in self-directed learning on overall learning quality. However, the correlations between visitation frequency, online video viewing duration, assignment scores and final scores are not significant. This may be attributed to some students engaging in online activities solely for the purpose of improving their scores. 
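The behavior-score correlations reported above (for example, r = 0.52 for online test scores) can be reproduced with any statistics package once each student's learning-platform metrics are paired with their final exam score. A minimal sketch is shown below; the per-student records are invented for illustration, and Pearson's r is assumed since the exact coefficient is not stated in the text:

```python
# Illustrative only: synthetic per-student records, not data from SuperStarLearn.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 59
final_exam = rng.normal(80, 8, n)

# Hypothetical behavioral metrics, some loosely tied to the final score.
behaviors = {
    "online_test_score": final_exam * 0.6 + rng.normal(0, 6, n),
    "video_minutes": rng.normal(300, 60, n),                      # weakly related
    "course_interaction": final_exam * 0.5 + rng.normal(0, 7, n),
    "classroom_discussion": final_exam * 0.4 + rng.normal(0, 8, n),
}

# Correlation of each metric with the final exam score.
for name, values in behaviors.items():
    r, p = stats.pearsonr(values, final_exam)
    print(f"{name:22s} r = {r:+.2f}  p = {p:.4f}")
```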
In addition to carefully designing online teaching activities, teachers need to assign appropriate weights to the evaluation criteria for online self-directed learning, guiding students towards effective independent learning practices. Regarding offline learning behaviors, course interaction, PBL implementation, and classroom discussion showed higher correlations with final exam scores, with correlation coefficients of 0.53, 0.48 and 0.43, respectively. This suggests that classroom interaction and presentation discussions help deepen students’ understanding and mastery of the learned content. The feedback from the instructional survey indicates that the DDBOO teaching approach achieved the intended learning outcomes, as depicted in Fig. . 91.5% of students reported that data-driven blended teaching had improved their study habits, transformed their cognitive patterns, and strengthened their initiative. Additionally, 94.9% of students stated that classroom interaction was more dynamic, encouraging them to confidently pose questions and articulate their viewpoints. The majority of students acknowledged the significant impact of data-driven blended teaching on overall skill enhancement, particularly in autonomous learning, problem analysis, and teamwork.
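The agreement percentages quoted above follow directly from the raw 5-point responses: for each item, count the share of students who answered 4 (“agreed”) or 5 (“strongly agreed”). A small sketch with invented responses (pandas is used here purely for convenience; the study analyzed its data in SPSS):

```python
# Illustrative only: invented Likert responses (1-5) for three of the nine items.
import pandas as pd

responses = pd.DataFrame({
    "study_habit":       [5, 4, 4, 3, 5, 4, 2, 5, 4, 4],
    "class_interaction": [5, 5, 4, 4, 4, 5, 3, 4, 5, 4],
    "teamwork":          [4, 3, 4, 5, 4, 2, 4, 4, 5, 3],
})

# Share of students answering "agreed" (4) or "strongly agreed" (5) per item.
agreement = (responses >= 4).mean() * 100
print(agreement.round(1))
```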
This study investigated the effectiveness of a DDBOO teaching approach in medicinal chemistry for pharmacy students. The DDBOO model, integrating online resources with traditional classroom instruction, yielded significant improvements in students’ comprehension and application of complex pharmaceutical concepts and in their self-perceived competence, as measured by post-course surveys. These findings not only highlight the effectiveness of the DDBOO model but also align with existing research on the benefits of blended learning, including flexibility, diverse resources, and enhanced student engagement . Furthermore, DDBOO facilitates real-time feedback, adaptability, and a shift from teacher-centered learning to active problem-solving and collaboration. Data-driven assessments further empower instructors by allowing early intervention and ongoing refinement of teaching methods based on student performance data . This combined approach paves the way for optimizing student learning and engagement in medicinal chemistry education. Blended online-offline teaching addresses pharmacy students’ need for practical skills by freeing up classroom time for hands-on practice . Online platforms enable flexible, self-paced learning of theoretical knowledge outside of class, maximizing learning efficiency. This organic integration of theory and practice fosters the development of comprehensive abilities, including operational skills, critical thinking, and innovation.
Students appreciate the flexibility and diverse resources offered by the blended approach, leading to increased engagement and enjoyment of the learning process. The teacher plays a pivotal role in blended online-offline teaching for medicinal chemistry. They design a curriculum integrating both online and offline components, selecting materials tailored to medicinal chemistry education. Utilizing online platforms and resources, teachers engage students in various activities such as discussions and virtual experiments, guiding them through the online learning environment. Moreover, teachers adopt a data-driven approach, collecting and analyzing student performance data to provide individualized support and targeted interventions. Continuous feedback on student performance informs the adaptation of teaching strategies to meet diverse learning needs. Offline sessions, including laboratory work and group discussions, complement online components to offer a comprehensive learning experience. Through these efforts, teachers create a supportive and collaborative environment, fostering student interaction and critical thinking . Overall, the teacher acts as a facilitator, guide, and analyst, utilizing data-driven insights to optimize the blended online-offline teaching approach in medicinal chemistry. Limitations. Despite its advantages, the DDBOO teaching model also presents several limitations. One notable limitation is the potential for unequal access to technology and online resources among students, which may widen existing educational inequalities. Additionally, the success of the DDBOO model relies heavily on effective technology integration and teacher training, which may pose challenges for institutions with limited resources or infrastructure. Moreover, the model’s effectiveness may vary depending on factors such as student motivation, prior knowledge, and learning preferences, highlighting the need for further research to better understand its impact across different contexts and populations. Overall, while the DDBOO teaching model offers numerous benefits for enhancing student learning and engagement, careful consideration of its limitations is essential for its successful implementation and long-term sustainability. In conclusion, the application of the data-driven blended online-offline teaching model in medicinal chemistry for pharmacy students has demonstrated promising results in enhancing learning outcomes and satisfaction levels. This innovative approach, guided by big data technology, provides a tailored and personalized learning experience that addressed individual student needs. The findings of this study underscore the potential of integrating advanced teaching methodologies with traditional classroom instruction to optimize the educational experience in pharmacy education. Future research should explore the applicability of this blended teaching model to other disciplines, such as clinical medicine, nursing, and public health. Below is the link to the electronic supplementary material. Supplementary Material 1 |
Evaluation of the roughness, color match, and color stability of two monochromatic composite resins: a randomized controlled laboratory study | 7c65fe5c-f537-4ce6-bd29-ae4d2f7d2774 | 11847386 | Dentistry[mh] | Aesthetic and restorative dentistry aims to replace lost and/or compromised dental structures with restorative materials that have physical, biological and functional properties similar to those presented by natural teeth . Natural teeth are polychromatic structures composed of organic and inorganic structures that present distinct and complementary optical characteristics , making it difficult to choose the color of the restorative material. Many restorative techniques are performed in an attempt to copy all the properties inherent to natural teeth and allow for imperceptible restorations. The use of different colors and opacities of resins in the layering technique intends to camouflage the restorations . To facilitate and reduce the number of resin kits, it was created, in 2019, the monochromatic resins Omnichroma (Tokuyama Dental, Tokyo, Japan) and Vittra (APS Unique FGM, Joinville, SC, Brazil). These one-shade resins can copy the color of the dental substrate that will be restored, right after the material’s light-curing, so that they have high translucency and are known to produce a “chameleon effect” . The smart technology used can make the composite itself weakens or amplifies specific wavelengths of light to blend with adjacent tooth color . This material is considerably versatile, as it can be used in restorations in anterior and posterior teeth and can restore colors from A1 to D4 (Vita ® Classical scale), excluding the need for layering and color selection, allowing longer clinical work time under the light of the reflector . They also present a good response to polishing and reduction of the amount of resins arranged in the dental office, reducing the cost of materials . However, when there is no lingual or palatal wall at the time of the restorative procedure, these materials will transmit oral darkness . In this clinical situation, the monochromatic resin is indicated as the second layer, after the reconstruction of the lost wall by using opacifying or dentin resin . The evaluation of composite resins involves understanding the interplay between thermocycling, color stability, and surface roughness . Thermocycling is a method used to simulate the thermal stresses that materials undergo in the oral environment and the surface roughness is the value of the texture on the surface of a material . Both the spectrophotometers and the CIEDE2000 formula have been used to objectively measure color changes before and after thermocycling . Clinical and laboratorial studies have shown positive results for these newer composites when evaluating optical properties, encompassing translucency, opalescence, and their potential for color adjustment . The Omnichroma was the first genuinely developed one-shade composite resin . Since then, other one-shade composites have been developed with differences in chemical composition and pricing. Studies on these new composites are necessary to understand the behavior of the material and to evaluate the differences among the one-shade composites commercially available. No study was found in the literature that evaluated and compared the laboratory performance of two one-shade composite resins. 
Thus, the objective of the present study was to evaluate the efficacy of monochromatic resins in capturing the adjacent color in the different shades of the teeth in different types of cavities, before and after thermocycling. In addition, the surface roughness of the restorations was evaluated. Sample This is a randomized controlled laboratory trial, approved by the Research Ethics Committee of the Federal University of the Vales do Jequitinhonha e Mucuri (UFVJM) (protocol number 5,960,089). All teeth were sterilized in an autoclave for the protection of the researchers and kept in distilled water during the study. To determine the sample size, the calculation for proportion estimation adjusted for finite population was used. Calculations with a significance level of 5%, with a power of 80%, a color compatibility ratio of 84% and a margin of error of 9% determined a minimum of 57 restorations per group. Mandibular central incisors and mandibular lateral incisors that belonged to the human teeth bank of the UFVJM School of Dentistry were included. Incisors with carious cavities, fractured teeth, restored teeth, or teeth with defective enamel were excluded. After the 40 teeth in the study were sterilized, Arabic numerals from 1 to 40 were inscribed on their roots and pigmented with blue nail polish (Fig. ). The study had two groups: Group 1 ( n = 20) - Omnichroma composite resin (Tokuyama Dental, Tokyo, Japan) and Group 2 ( n = 20) - Vittra APS Unique composite resin (FGM, Joinville, SC, Brazil). Randomization, allocation concealment, and masking procedures The resin to be applied to each tooth was randomized by an independent researcher (JCRG) through a simple draw, and the result was kept in an opaque envelope, which was revealed at the time of restoration. The operators (ICS, SSO) and evaluator (DWDO) were masked as to the resin used. The resins were dispensed into an opaque dappen dish and delivered to the operator at the time of resin insertion. The evaluator was unaware of which resin was applied to each tooth. Preparation of specimens Two researchers (ICS, SSO) performed the standardized classes III, IV and V cavity preparations (Fig. A) using a high-speed handpiece under refrigeration and with 1090 and 1017 diamond tip drills on all tooth. The drills were discarded and replaced after 5 cavities had been made. Class III – Using a 1017 diamond tip drill, the cavity was made from 3.0 mm from the incisomesial edge, shaped as a half cane, with 2.1 mm in height, 2.1 mm in axial depth and VL, maintaining the lingual wall. At every cavosurface angle a bevel was made with 0.1 mm at 45º with a flame-shaped diamond tip drill. Class IV – Prior to preparation, the crown/lingual surface was copied in condensation silicone, forming a mold for later reconstruction. Using a cylindrical round diamond tip drill (1090), the incisodistal angle was removed with the proportions of 0.8 mm MD and 0.4 mm incisocervical. At every cavosurface angle, the bevel was made with 1 mm at 45º with a flame-shaped diamond tip drill. Condensation silicone matrices were made prior to the preparation of the cavities in order to assist in the restoration of the lingual and incisal anatomy of Black’s class IV restorations. Class V – Using a 1017 diamond tip drill, the cavity was made at the root crown junction, in a spherical shape with 2.1 mm in height, 1.0 mm of depth of the axial wall that which should be parallel to the buccal surface (convex) and 2.1 mm of MD width. 
At every cavosurface angle in enamel, the bevel was made with 1.0 mm at 45º with a flame-shaped diamond tip drill. The three cavities of the same tooth received the same composite resin. Initially, the opperator performed selective conditioning with 35% phosphoric acid Potenza Attacco (PHS do Brasil, Joinville, Brazil) on enamel for 30 s (Fig. B). Then, the samples were washed thoroughly and dried with absorbent paper. Once this was done, the Ambar Universal adhesive system (FGM, Joinville, Brazil) was applied with a disposable brush and, after waiting 30 s for the solvent to evaporate, it was then polymerized for 20 s (Fig. C). The resins were inserted into each cavity in increments of 1.5 mm, connecting two opposite walls and immediate light polymerization for 40 s, operated in continuous mode (LED Photopolymerizer D, Gnatus, São Paulo, Brazil), in each increment. This procedure was repeated until the cavity was filled and the restoration was completed, reestablishing the anatomical contour of the cervical third of the buccal surface (Fig. F). For all class IV restorations, a previously prepared silicone matrix helped with the lingual and incisal anatomy (Fig. D). On the matrix in both groups, the Blocker Ominichroma resin (Tokuyama Dental, Tokyo, Japan) was applied with a thickness of 0.5 mm and taken to the cavity and, after accommodation, it was polymerized for 40 s (Fig. E). After 7 days, the restorations were polished (SEF) with Potenza Specchi diamond paste (PHS do Brasil, Joinville, Brazil) and polishing discs with decreasing grain sizes and a diamond master felt disc (FGM, Joinville, Brazil) (Fig. G). Table lists the materials used to create the restorations and their respective compositions. Thermocycling process The 40 restored teeth were kept in distilled water at 37 °C and replaced every day for 1 week. In them, 2000 thermal cycles (CT) were carried out in the thermocycling machine (Ética Equipamentos Cientifiques S/A, São Paulo, SP, Brazil) which consisted of water baths at two different temperatures (5 °C and 55 °C), for 30 s each and a transfer time of 5 s. After each cycle, the specimens were stored again in water at 37 °C . Visual color evaluation Restorations were evaluated using modified FDI and USPHS criteria , before and after aging, by a blinded researcher (DWDO). The USPHS (shape, margin and color) criteria were adopted as described : Alpha (A)- The restoration matches the color and translucency of the adjacent dental tissues; Bravo (B)- The restoration does not acceptably match the color and translucency of the adjacent dental tissues; Charlie (C)- The restoration does not match the color and translucency of the adjacent dental tissues to an unacceptable extent. The modified FDI criteria were evaluated according to Table . For intra-examiner calibration, one examiner (DWDO) was trained to USPHS and FDI criteria using natural teeth extracted and restored with composite resin. The same restorations were evaluated by the same examiner at two different times, 14 days apart. Kappa agreement was K = 0.920 for USPHS and K = 0.899 for FDI. Surface roughness Class V restorations were submitted to three roughness readings, from which the mean roughness was measured. The instrument was calibrated with the standard specimen, with a roughness of 1.80 μm in Ra. Ra analysis means the calculation of the arithmetic mean between peaks and valleys of a given surface. 
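By convention, Ra is the arithmetic mean of the absolute deviations of the measured profile from its mean line over the evaluation length, which is the value the tester reports directly. A toy calculation on an invented profile illustrates the definition:

```python
# Illustrative only: a synthetic profile, not a reading from the SRT-6210 tester.
import numpy as np

# Heights (micrometres) sampled along a short evaluation length.
profile = np.array([1.92, 2.10, 1.75, 2.31, 1.88, 2.05, 1.69, 2.24])

mean_line = profile.mean()                 # reference (mean) line
ra = np.mean(np.abs(profile - mean_line))  # Ra: mean absolute deviation
print(f"Ra = {ra:.3f} um")
```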
The teeth were placed in a container filled with modeling clay for stabilization during the three roughness readings. The measurements were obtained using a Digital Surface Roughness Tester SRT-6210 (Merit-mi, Qingdao, China) with a cutoff of 0.25 mm, before and after the aging process. For the color measurements, the teeth were placed in a high-density EVA container and held in position with a tweezer for stabilization and standardization (Fig. ). Objective color evaluation Before and after aging, the teeth and restorations were evaluated using the SP62S spectrophotometer and the QA-Master I software (X-Rite Incorporated, Neu-Isenburg, Germany). In each restoration, three consecutive measurements were taken with the tip of the meter positioned at the center of the restoration. The tooth color was read in the central region of the buccal surface, also with three measurements. The color of the tooth and of each restoration was determined as the arithmetic mean of the three readings. Color measurements were expressed in the CIE L*a*b* color system . The total color difference between tooth and restoration (ΔE*) was calculated with the formula ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2). The following cut-off points were used for the ΔE analysis: ΔE ≤ 1 – color difference that cannot be detected clinically; 1 < ΔE ≤ 3.7 – color difference clinically detectable by all observers but clinically acceptable; ΔE > 3.7 – clinically unacceptable color difference and poor color match .
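The ΔE* formula and the clinical cut-offs above translate directly into a short routine. A minimal sketch (the CIE L*a*b* readings are invented for illustration; in the study this step was handled by the QA-Master I software):

```python
# Illustrative only: made-up CIE L*a*b* readings for one tooth and its restoration.
import math

def delta_e(lab_tooth, lab_restoration):
    """Total color difference (dE*) between two CIE L*a*b* readings."""
    dL = lab_tooth[0] - lab_restoration[0]
    da = lab_tooth[1] - lab_restoration[1]
    db = lab_tooth[2] - lab_restoration[2]
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

def classify(de):
    """Clinical interpretation using the cut-offs adopted in the study."""
    if de <= 1.0:
        return "not clinically detectable"
    if de <= 3.7:
        return "detectable but clinically acceptable"
    return "clinically unacceptable"

tooth = (72.4, 1.8, 18.2)        # mean of three readings on the tooth
restoration = (70.9, 2.3, 20.1)  # mean of three readings on the restoration

de = delta_e(tooth, restoration)
print(f"dE* = {de:.2f} -> {classify(de)}")
```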
Statistical analyses were performed using the SPSS ® for Windows ® (Statistical Package for the Social Sciences, IBM Corp., United States) statistical package in version 26. Exploratory analyses of the data provided frequencies, means, and standard deviations. The assessment of normality was verified by the Shapiro-Wilk test. Intragroup comparison of quantitative data was performed using the paired T-test, and intergroup analysis was performed using the independent T-test. Categorical variables were submitted to the Chi-Square test and Fischer’s exact test. The confidence interval used was 95%. The level of significance adopted was 5%. The magnitude of effect was analyzed for the difference of the ΔE between groups. It was used the Cohen’s d to calculate the size effect. The results were categorized as having a small (0.20 < d), medium (0.21 < d < 0.50), or large (d > 0.51) effect. One sample of Omnichroma composite resin was lost (fractured) after thermocycling. At baseline, there was statistically significant difference between the class IV restoration of group 1 and the tooth when evaluating the parameters a* ( p = 0.004) and b* ( p = 0.002), as well as, between the class V and tooth for parameter b* ( p < 0.001) (Table ). After thermocycling, the ∆E varied from 3.53 (class III) to 3.80 (class V) (Table ). At baseline, there was no statistically significant difference between the class III restoration of group 2 and the tooth when evaluating the parameters L* ( p = 0.462), a* ( p = 0.252) and b* ( p = 0.335); however, there was statistically difference between tooth and the class IV for b* ( p < 0.001) (Table ). At baseline, the ∆E of group 2 varied from 4.35 (class III) to 5.52 (class V) (Table ). There was no significant difference between the ∆E of Omnichroma and Vittra resins ( p > 0.05) (Table ). The size effect ranged from 0.146 (small) to 0.343 (medium) (Table ). At baseline and after thermocycling, there was no significant difference ( p > 0.05) between the USPHS parameters between the Omnichroma and Vittra resins (Table ). At baseline, when analyzing color using the FDI criterion, statistical significance ( p < 0.001) was observed between Omnichroma and Vittra resins in class IV (Table ). When comparing the surface roughness of Omnichroma and Vittra resins, no statistically significant difference was identified between them either initially ( p = 0.564) or after thermocycling ( p = 0.690) (Table ). Monochromatic composite resins eliminate the need for layering and allow the use of just one resin for the entire restoration , as their technology provides color and opacity adjustment equivalent to the dental substrate . The present study was necessary to evaluate the effectiveness of this innovative material. This study demonstrated that the Omnichroma and Vittra resins present similar performance and fulfill the purpose of adapting to any color of the remaining tooth. The CIEL*a*b* system is the most used in colorimetric evaluation in research that seeks to specify colors objectively . The present study showed that class V of Omnichroma resin was the one that differed most from the tooth both before and after aging. This difference is probably due to its location at the crown-root interface with possible color distortion at the time of reading. The composite resin Vittra APS Unique showed greater discrepancy in relation to the tooth in class IV at baseline and in class V after aging. 
This was probably due to the different reading locations, being in the center of the tooth and at the ends of the restorations, without considering the individual characteristics of each tooth, such as translucency, opacity, opalescence in their different dental thirds, since natural teeth are polychromatic structures composed of organic and inorganic structures (enamel, dentin and pulp) that have distinct and complementary optical characteristics . Therefore, the better adaptability of both resins in Class III at baseline is justified, since it is closer to the spectrophotometric reading location in the middle third of the tooth. Also, it is expected that the color of resin composites changes after thermocycling, which significantly affects the surface texture . Hydrolytic degradation occurred after thermal cycling, and due to ruptures at the matrix-resin interface, fillers were exposed in some places and more pronounced scratches and grooves were formed . This fact may explain the discoloration that occurred with the monochromatic composites tested. When analyzing the ∆E, it was found that the Omnichroma resin was clinically unacceptable at baseline class V, which decreased after thermocycling, becoming clinically acceptable. Contrary to the present study, Martins (2022) identified in his research that the Omnichroma resin presented higher ∆E values after 30 days, which occurred due to the immersion of the specimens in coffee, being considered clinically unacceptable at the end of the study. On the other hand, the composite resin Vittra APS Unique also showed low values of ∆E in the groups that were immersed in water corroborating the findings of this study in which the ∆E, despite initially unacceptable in all classes, managed to adjust after thermocycling, becoming clinically acceptable, which suggests its improvement over time. Several factors justify the color variation between resins, such as differences in their chemical formulations . However, this study did not identify a statistically significant difference in the ∆E of the groups, which indicates that the resins have similar behavior in relation to clinical acceptability, as both present similar color in relation to the tooth. This statistical result was confirmed by the size effect analysis. The Cohen`s d identified a medium effect between the ∆E of the groups, it means that the difference in color compatibility between the two composite resins is not so pronounced as to consider one resin clinically more relevant than the other. The results suggest that, in both groups, the resins can capture the structural color of the surrounding tooth, controlled by the size of the filler particles . This study evaluated classes III, IV and V restorations using the criteria of the USPHS and modified FDI methods and, in both groups, the monochromatic resins were similar in terms of the parameters evaluated. Although subjective and imprecise, the USPHS clinical criteria for evaluating dental restorations, developed by Ryge in 1980 , is widely used in scientific circles to identify failed restorations that can be repaired or should be replaced . However, with the improvement in the quality of restorative materials, there was a need to adopt a more sensitive method, with greater discriminatory power, namely the FDI method, presented by Hickel et al. in 2007 , which has been gaining prominence in research. 
Although objective analysis with a spectrophotometer provides precise and quantitative data on color, visual evaluation offers a qualitative and practical perspective that can be crucial for clinical decision-making. Visual evaluation is an essential daily practice for the dentist in the selection and adjustment of dental restoration colors, and the analysis conducted in this study therefore reflects real-world clinical practice. Monochromatic resins may not be recommended for anterior restorations because of their high translucency, which can reflect the background color of the oral cavity and leave the restoration with a grayish appearance. However, the present study verified good clinical behavior of the class IV restorations, mainly in terms of the brightness of the Omnichroma resin. This was possibly due to the use of the Omnichroma Blocker resin, whose role is to provide high hue and chroma and to reduce the brightness of the background color when the missing lingual barrier is built. The roughness of composites plays an important role in plaque retention, abrasiveness, wear kinetics, tactile perception, coloration, and the natural brightness of the restoration. Thus, the longevity of color in restorations in the oral cavity is influenced by the quality of the surface, which is considered satisfactory when it resembles the natural aesthetics of tooth enamel and maintains low levels of roughness. The surface smoothness of microhybrid resins is limited by their larger particle sizes and non-standardized shapes. In contrast, nanohybrid resins, in addition to having smaller particle sizes, also exhibit nanometric standardization of their shape. The results of the present study indicate that the finishing and polishing protocol of the restorations was standardized and, therefore, the surfaces of both resins reached the same degree of roughness, brightness, and surface wear. The roughness of the monochromatic resins evaluated in this study did not change statistically after thermocycling, a result that differs from the findings of Alex & Venkatesh (2024), who reported a higher Ra value for the surface roughness of Omnichroma after simulated brushing; the use of that abrasive method may explain the difference in roughness between the studies. The objective and subjective analyses of color and surface roughness indicate that the two composite resins present similar results. This is probably due to the technology of both resins, which captures the adjacent structural color through their nanometric spherical fillers. The fillers are the exact size and shape needed to generate the red-yellow color as ambient light passes through the composite, without the need for added pigments or dyes. This color generated by the spherical fillers matches the color reflected from the surrounding tooth structure, creating the perfect match with the tooth color. Since the Vittra and Omnichroma composites have similar roughness and color-matching capabilities, the present results suggest that other factors, such as cost, ease of application, durability, and clinical acceptance, are also important in determining the choice of a single-shade resin composite. Furthermore, these resins eliminated the need for layering and prior color selection, reducing restoration time. The subjective outcomes (USPHS criteria) regarding optical behavior and marginal discoloration found in this study are consistent with the findings of Anwar et al.
(2024), who concluded that Omnichroma demonstrated excellent color matching and good color stability over 12 months. However, it is important to note that Anwar et al. (2024) conducted a clinical study over one year, whereas the present study was a laboratory investigation. The similarity between those clinical results and the results of this laboratory test suggests that the monochromatic resin is clinically acceptable. Randomization and masking are two strategies considered essential pillars of methodological rigor, capable of minimizing research bias, preventing distortions in studies, and ensuring more robust results. Because the researchers in the current study were masked, the assessment of the restorations was more objective and unbiased, which increases the internal validity of the study. This study, although well conducted, has some limitations: the initial reading of tooth color was restricted to the middle third, which did not coincide with the areas restored later at the extremities, and no chromogenic (staining) substance was used during the aging process. In addition, the scarcity of laboratory studies comparing two monochromatic resins prevented comparison with other results. Future studies are suggested in which the color reading is performed in all three thirds of the teeth, to make the comparison between the resin and the tooth more reliable, as well as evaluation of the surface roughness by atomic force microscopy. Studies evaluating the behavior of monochromatic resins in posterior teeth, and clinical trials evaluating their long-term behavior, are also suggested. In conclusion, the color match of the composite resins Omnichroma and Vittra APS Unique was clinically satisfactory in the visual analysis. In the spectrophotometric evaluation, these one-shade composites exhibited high ∆E values at baseline, and the ∆E was clinically acceptable after thermocycling. The monochromatic composite resins tested were similar to each other in terms of color match and color stability with the tooth structure, showing better adaptation after thermocycling. Both resins showed low surface roughness.
Apiaceae Medicinal Plants in China: A Review of Traditional Uses, Phytochemistry, Bolting and Flowering (BF), and BF Control Methods | 300ad8fb-955c-447b-8a6c-0a15fc7c4b3c | 10254214 | Pharmacology[mh] | Apiaceae (syn. Umbelliferae) is one of the largest angiosperm families. It includes 300 genera (3000 species) globally and 100 genera (614 species) in China . Apiaceae plants have been widely used in healthcare, nutrition, the food industry, and other fields . Currently, 55 genera (230 species) of Apiaceae plants have been used as medicinal plants, and over 20 species have been widely used as traditional Chinese medicines (TCMs) . Extensive studies have demonstrated that Apiaceae medicinal plants (AMPs) present a variety of pharmacological properties for the treatment of central nervous system, cardiovascular, and respiratory system diseases, amongst others . These pharmacological activities are largely associated with metabolites such as polysaccharides, alkaloids, phenylpropanoids (simple phenylpropanoids and coumarins), flavonoids, and polyene alkynes . In China, Apiaceae plants have been primarily used as traditional medicines for relaxing tendons, activating blood, relieving superficial wounds, treating colds, etc. . For example, rhizomatous and whole plants are mainly used for the treatment of common colds, coughs, asthma, rheumatic arthralgia, ulcers, and pyogenes infections; fruits are mainly used for regulating vital energy, promoting digestion, relieving abdominal pain, and treating parasites . The occurrence of bolting and flowering (BF) plays a critical role in the transition from vegetative growth to reproductive development in the plant life cycle . However, BF significantly reduces the accumulation of metabolites in vegetative organs, which ultimately leads to the lignification of rhizomes and/or roots such as sugar beet , lettuce , and Chinese cabbage . In particular, it common that BF significantly reduces the yield and quality of the rhizomatous AMPs . Extensive studies have demonstrated that BF is regulated by both internal factors (e.g., germplasm resource, seedling size, and plant age) and external factors (e.g., vernalization, photoperiodism, and environmental stresses) . To date, the BF of most rhizomatous AMPs have not been effectively controlled . In order to form a comprehensive understanding of the current status of AMPs in China, herein, the progress on traditional use, phytochemistry, BF, and controlling approaches are summarized. This review will provide useful references for the efficient cultivation and quality improvement of AMPs. Information on AMPs was attained using scientific databases (i.e., PubMed, Web of Science, Springer, and CNKI), using the following keywords: Apiaceae plant, traditional use, phytochemistry, BF, and lignification. Additional information was collected from ethnobotanical studies that mainly focused on the “ Flora of China ” and local classical literature, such as “ Divine Husbandman’s Classic of the Materia Medica ( Shen Nong Ben Cao Jing )”, “ Compendium of Materia Medica ”, “ Illustrated Book on Plants ”, “ Collection of National Chinese Herbal Medicine ”, and “ Pharmacopoeia of the People’s Republic of China” (2020). The names of all plants correspond to the database Catalogue of Life China . Chemical structures were drawn using ChemDraw 21.0.0 software. Apiaceae plants have been traditionally used as medicines in China for ca. 2400 years . 
In 390–278 BC, three Apiaceae plants, including Angelica dahurica , Ligusticum chuanxiong , and Cnidium monnieri, were first recorded as medicines in “ Sorrow after Departure ” . With the progress of Chinese civilization, ca. 100 Apiaceae plants were historically recorded as medicines. Specifically, 12 AMPs (e.g., Angelica decursiva , Bupleurum chinense , and Centella asiatica ) were recorded in the known herbal text of China, the “ Divine Husbandman’s Classic of the Materia Medica ( Shen Nong Ben Cao Jing )” in 1st and 2nd century AD . In 1578 and 1848, 24 and 31 AMPs were respectively recorded in the “ Compendium of Materia Medica and Illustrated Book on Plants” . In the 21st century, the number of AMPs has been continually increasing, up to 93 species recorded in the “ Flora of China ” in 2002 , and 96 species in the “ Collection of National Chinese Herbal Medicine ” in 2014 . In recent years, 22 species were recorded in the “ Pharmacopoeia of the People’s Republic of China ” . Specifically, 18 species are used with rhizomes and/or roots . To our best knowledge, a total of 228 AMPs used as TCMs were collected from previously published studies and books . Based on the traditionally used medicinal parts, the 228 AMPs were categorized into six classes, including 51 species (21 genera) used with the whole plants (i.e., rhizome and/or root, stem, and leaf), 184 species (44 genera) used with rhizomes and/or roots, 5 species (5 genera) used with stems, 9 species (8 genera) used with leaves, 17 species (14 genera) used with fruits, and 1 species (single genus) used with seeds. Specifically, the 51 species (21 genera) used with the whole plants include Anethum , Anthriscus , Apium , Bupleurum , Centella , Conium , Coriandrum , Cryptotaenia , Eryngium , Ferula , Foeniculum , Hydrocotyle , Oenanthe , Peucedanum , Pimpinella , Pleurospermum , Pternopetalum , Sanicula , Sium , Spuriopimpinella , and Torilis genera. In particular, Sanicula (e.g., S. astrantiifolia , S. caerulescens , S. chinensis ), Hydrocotyle (e.g., H. himalaica , H. hookeri , and H. nepalensis ), and Pimpinella (e.g., P. candolleana , P. coriacea , and P. diversifolia ) genera plants are usually used as whole plants. The 184 species (44 genera) used with the rhizomes and/or roots, which make up the majority of AMPs, include Angelica , Anthriscus , Apium , Archangelica , Bupleurum , Carum , Changium , Chuanminshen , Cicuta , Cnidium , Conioselinum , Daucus , Eriocycla , Ferula , Foeniculum , Glehnia , Heracleum , Hymenidium , Kitagawia , Levisticum , Libanotis , Ligusticopsis , Ligusticum , Meeboldia , Nothosmyrnium , Oenanthe , Osmorhiza , Ostericum , Peucedanum , Phlojodicarpus , Physospermopsis , Pimpinella , Pleurospermum , Pternopetalum , Sanicula , Saposhnikovia , Selinum , Semenovia , Seseli , Seselopsis , Spuriopimpinella , Tongoloa, Torilis , and Vicatia genera. Specifically, Angelica (e.g., A. biserrata , A. dahurica , and A. sinensis ), Bupleurum (e.g., B. bicaule , B. chinense , and B. scorzonerifolium ), and Ligusticum ( L. chuanxiong , L. jeholense , and L . sinense ) genera plants are usually used as rhizomes and/or roots. The 5 species (5 genera) used with the stems include Aegopodium ( A. alpestre ), Coriandrum ( C. sativum ), Foeniculum ( F. vulgare ), Ligusticum ( L. chuanxiong ), and Oenanthe ( O. javanica ); the 9 species (8 genera) used with the leaves include Aegopodium ( A . alpestre ), Anethum ( A . graveolens ), Angelica ( A . morii ), Anthriscus ( A. nemorosa and A. sylvestris ), Carum ( C . 
carvi ), Daucus ( D. carota ), Foeniculum ( F . vulgare ), and Ligusticum ( L. chuanxiong ); the 17 species (14 genera) used with the fruits include: Ammi ( A. majus ), Carum ( C. buriaticum and C. carvi ), Cnidium ( C . monnieri ), Coriandrum ( C . sativum ), Cuminum ( C . cyminum ), Cyclorhiza ( C . peucedanifolia ), Daucus ( D. carota L. and D. carota var. Carota), Pimpinella ( P. anisum ), Trachyspermum ( T. ammi ), and Visnaga ( V. daucoides ) genera; the single genera used with the seeds is Ferula ( F. bungeana ) . As is shown in , distinct traditional uses of the 228 AMPs were recorded. Based on their clinical agents, a total of 79 traditional uses are enriched, with 40 species contributing to the treatment of relieving pain, 36 species to the treatment of dispelling wind; and 21 species to the treatment of eliminating dampness . Moreover, the AMPs were also widely used as “ethnodrugs” for ethnic minorities in China. For example, Carum carvi was used as Tibetan medicine for the treatment of dispelling wind and eliminating dampness, as well as treating cat fever and joint pain ; Trachyspermum ammi was used as Uygur medicine for the treatment of eliminating cold damp, dispelling coldness, and promoting digestion; Angelica acutiloba was used in Korean medicine for the treatment of strengthening the spleen, enriching blood, stopping bleeding, and promoting coronary circulation ; Angelica sinensis was used as medicine for the Tujia minority for the treatment of enriching the blood, treating dysmenorrheal, and relaxing the bowel ; and Chuanminshen violaceum was used as a geo-authentic medicine of Sichuan province for the treatment of moistening the lungs, treating phlegm, and nourishing the spleen and stomach . Meanwhile, AMPs combined with other herbs have also been applied for thousands of years . For example, the Decoction of Notopterygium for Rheumatism is a famous Chinese prescription and is composed of Notopterygium incisum , Angelica biserrata , Ligusticum sinense , Eryngium foetidum , and Ligusticum chuanxiong , etc.; it has been widely used for the treatment of exopathogenic wind-cold, rheumatism, headache, and pantalgia . The Xinyisan that is composed of Yulania liliiflora , Actaea cimicifuga , Angelica dahurica , Eryngium foetidum , Ligusticum sinense , etc., has been widely used for the treatment of deficiency of pulmonary qi and nasal obstruction due to wind-cold pathogens and damp-heat in the lung channel . The Shiquan Dabu Wan of Angelica sinensis that is recorded in the “ Pharmacopoeia of the People’s Republic of China ” has been mainly used for the treatment of pallor, fatigability, and palpitations . The Juanbi Tang of Notopterygium incisum and Angelica biserrata that is recorded in “ Medical Words ” (Qing dynasty) has been mainly used for treatment of arthralgia due to wind cold-dampness . Modern pharmacological research on the 228 AMPs is summarized in . Based on the pharmacological effects, a total of 62 modern uses are identified , with 36 species showing anti-inflammatory activity, 20 species showing antioxidant activity, and 16 species showing antitumor activity. In addition, other modern uses are also identified, such as antitumor, bacteriostatic, and analgesic. These modern pharmaceutical properties have been demonstrated to be associated with bioactive metabolites, and several metabolites have been found to be co-existent in the TCMs . 
Specifically, sesquiterpene-coumarin, such as (3′S, 5′S, 8′R, 9′S, 10′R)-kellerin, gummosin, galbanic acid, and methyl galbanate from Ferula sinkiangensis resin, showed anti-neuroinflammatory effects and might be a potential natural therapeutic agent for Alzheimer’s disease . The supercritical carbon dioxide extracts from Apium graveolens showed antibacterial effects, with the highest inhibitory activity against Bacillus cereus . In vitro, the antitumor activity of AMPs have been identified; for example, the ferulin B and C in Ferula ferulaeoides rhizomes could restrain the multiplication of HepG2 stomach cancer cell lines, and 2,3-dihydro-7-hydroxyl-2R*, 3R*-dimethyl-2-[4,8-dimethyl-3(E),7-nonadienyl]-furo [3,2-c] coumarin could restrain the proliferation of HepG2, MCF-7, and C6 cancer cell lines . In addition, the osthole in Angelica biserrata could restrain the multiplication of human gastric cancer cell lines MKN-45 and BGC-823, human lung adenocarcinoma cell line A549, human mammary carcinoma cell line MCF-7, and human colon carcinoma cell line LOVO . The antioxidative activity of AMPs has been also identified; for example, the imperatorin, oxypeucedanin hydrate, and bergaptol in Angelica dahurica exhibited DPPH scavenging activity , hydromethanolic extracts from Pimpinella anisum exhibited free radical scavenging activity , and water-soluble polysaccharides in Chuanminshen violaceum scavenged DPPH, hydroxyl, and superoxide anion radicals . As is shown in , hundreds of bioactive metabolites have been identified from the 228 AMPs . Based on their chemical structures, these metabolites can be categorized into five main classes: (1) polysaccharides, (2) alkaloids, (3) phenylpropanoids, (4) flavonoids, and (5) terpenoids . Among the 22 AMPs recorded in the “ Pharmacopoeia of the People’s Republic of China ” , 18 secondary metabolites in the 17 AMPs (e.g., Angelica biserrata , Bupleurum chinense DC. , and Centella asiatica ) were described as quality control indicators, which include: 10 phenylpropanoids (i.e., osthole, columbianadin, imperatorin, isoimperatorin, nodakenin, ferulic acid, trans-anethole, notopterol, praeruptorin A, and praeruptorin B), 4 terpenoids (i.e., saikosaponin a, saikosaponin d, asiaticoside, and madecassoside), 2 chromones (i.e., prim-O-glucosylcimifugin and 5-O-methylvisammioside), and 2 phthalides (i.e., ligustilide and levistilide A); a specific quality marker has not been reported for the other 5 AMPs (e.g., Changium smyrnioides , Daucus carota L., and Glehnia littoralis ) . 7.1. Polysaccharides Polysaccharides are the largest components of biomass and account for ca. 90% of the carbohydrates in plants . Studies have demonstrated that polysaccharides in medicinal plants are indispensable bioactive compounds, presenting uniquely pharmacological effects such as immunomodulatory, hypoglycemic, antitumor, anti-diabetic, and antioxidant effects, amongst others, with few side effects or adverse drug reactions . To date, polysaccharides in the 228 AMPs have also been identified, showing multiple pharmacological effects. For example, polysaccharides in Angelica sinensis present hematopoietic, antitumor, and liver protection effects ; polysaccharides in Angelica dahurica protect spleen lymphocytes, natural killer cells, and procoagulants ; and polysaccharides in Bupleurum chinense and Bupleurum smithii present the effect of macrophage modulation, kidney protection, and inflammatory alleviation . 7.2. 
Alkaloids About 27,000 alkaloids presenting as water-soluble salts of organic acids, esters, and combined with tannins or sugars have been found in plants . Many alkaloids are valuable medicinal agents that can be utilized to treat various diseases, including malaria, diabetes, cancer, cardiac dysfunction, blood clotting–related diseases, etc. . Alkaloids in the 228 AMPs mainly exist in the Ligusticum , Apium , Conium , and Cuminum genera . Pharmacological studies have demonstrated that alkaloids in Ligusticum chuanxiong show the activity of inhibiting myocardial fibrosis, protecting ischemic myocardium, and relieving cerebral ischemia-reperfusion injury . A novel alkaloid 2-pentylpiperidine known as conmaculatin in Conium maculatum shows strong peripheral and central antinociceptive activity . Some alkaloids have been identified to show antidepressant activity, such as berberine in Berberis aristata , strictosidine acid in Psychotria myriantha , and Anonaine in Annona cherimolia ; these could be explored as an emerging therapeutic alternative for the treatment of depression. 7.3. Phenylpropanoids Phenylpropanoids are a large class of secondary metabolites biosynthesized from amino acids, phenylalanine, and tyrosine . Over 8000 aromatic metabolites of the phenylpropanoids have been identified in plants. These include simple phenylpropanoids (propenyl benzene, phenylpropionic acid, and phenylpropyl alcohol), coumarins, lignins, lignans, and flavonoids . 7.3.1. Simple Phenylpropanoids To date, limited simple phenylpropanoids have been identified from AMPs, including three phenylpropanoids (trans-isoelemicin, sarisan, and trans-isomyristicin) in the roots of Ligusticum mutellina . Ferulic acid, one of the phenylpropionic acids, is an important bioactive metabolite of AMPs; it mainly exists in Angelica , Ligusticum , Ferula , and Pleurospermum genera . Pharmacological studies have demonstrated that the ferulic acid in Angelica sinensis shows strong properties in inhibiting platelet aggregation, increasing coronary blood flow, and stimulating smooth muscle ; the ferulic acid in Angelica acutiloba shows antidiabetic, immunostimulant, antiinfammatory, antimicrobial, anti-arrhythmic, and antithrombotic activity ; and the ferulic acid in Ligusticum tenuissimum shows anti-melanogenic and anti-oxidative effects . 7.3.2. Coumarins Coumarins are the most widespread in 20 genera of AMPs (e.g., Angelica , Bupleurum , and Peucedanum ) and mainly include simple coumarins, pyranocoumarins, and furocoumarins . In recent years, distinct coumarins have been identified from AMPs, such as 99 coumarins in Ferula , 116 coumarins in Angelica decursiva and Peucedanum praeruptorum , and 9 coumarins in Angelica dahurica . Furthermore, 8 coumarins were selected as quality markers, including osthole (1) in Angelica biserrata and Cnidium monnieri ; columbianadin (2) in Angelica biserrata ; imperatorin (3) in Angelica dahurica and Angelica dahurica cv. Hangbaizhi; isoimperatorin (4) in Angelica dahurica , Angelica dahurica cv. Hangbaizhi, Notopterygium franchetii , and Notopterygium incisum ; nodakenin (5) in Angelica decursiva , Notopterygium franchetii , and Notopterygium incisum ; notopterol (8) in Notopterygium franchetii and Notopterygium incisum ; and praeruptorin A (9) and praeruptorin B (10) in Peucedanum praeruptorum (see and ) . 
To date, various biological activities of coumarins have been demonstrated, including antifungal, antimicrobial, antiviral, anti-cancerous, antitumor, anti-inflammatory, anti-filarial, enzyme inhibitory, antiaflatoxigenic, analgesic, antioxidant, and oestrogenic . For example, coumarins are recognized as the main bioactive constituents in Peucedani genus and play critical roles in relieving cough and asthma, strengthening heart function, as well as preventing and treating cardiovascular diseases such as nodakenin, -praeruptorin B, and praeruptorin C ; imperatorin oxypeucedanin hydrate, xanthotoxol, bergaptol, 5-methoxy-8-hydroxypsoralen, isoimperatorin, phelloptorin, and pabularinone in Angelica dahurica exhibit moderate DPPH scavenging activity, strong ABTS ·+ scavenging activity, and significant inhibition on HepG2 cells, which could be explored as new and potential natural antioxidants and cancer prevention agents ; pabulenol and osthol extracts from Angelica genuflexa show anti-platelet and anti-coagulant components ; and decursinol angelate in Angelica gigas shows platelet aggregation and blood coagulation activity . 7.4. Flavonoids Flavonoids are a group of the most abundant secondary metabolites in plants . Generally, flavonoids can be further categorized into eight subgroups, including flavones (e.g., apigenin, luteolin, and baicalein), flavonols (e.g., kaempferol, quercetin, and myricetin), flavanones (e.g., naringenin, hesperitin, and liquiritigenin), flavanonols (e.g., dihydrokaempferol, dihydromyricetin, and dihydroquercetin), isoflavones (e.g., daidzein, purerarin, and peterocarpin), aurones, anthocyanidins, and proanthocyanidins . In recent years, flavonoids have been identified from AMPs, such as 6 flavonoids (e.g., luteolin, isoquercitrin, and rutin) in Ferula , 12 flavonoids (e.g., quercetin-3- O -rutinoside, kaempferol-3,7-di- O -rhamnoside, quercetin-3- O -arabinoside) in Bupleurum , and 18 flavonoids (e.g., rutin, quercetin, and quercitrin) in Hydrocotyle . To date, various biological activities of flavonoids have been demonstrated, including antioxidant, antiinflammatory, antidiabetic, anticancer, antiobesity, and cardioprotective . For example, the apigenin in Apium graveolens shows anticancer properties , flavonoids in Pimpinella diversifolia DC. , Anthriscus sylvestris , and Sanicula astrantiifolia show antioxidant effects , and quercetin and its metabolites show vasodilator effects, with selectivity toward the resistance vessels . 7.5. Terpenoids About 25,000 terpenoids have been reported in plants; they are diverse secondary metabolites containing three subgroups, including monoterpenoids, sesquiterpenes, and triterpenoids . To date, terpenoids have been also identified in AMPs, such as 4 terpenoids (e.g., angelicoidenol, pregnenolone, and β-sitosterol) in Pleurospermum , 75 terpenoids (e.g., myrcene, farnesene, and xiongterpene) in Ligusticum , 109 terpenoids (e.g., nerolidol, guaiol, and ferulactone A) in Ferula , and 13 triterpenoids (e.g., ranuncoside, oleanane, and barrigenol) in Hydrocotyle sibthorpioides Lam. . Specifically, saikosaponin triterpenes constitute the main class of secondary metabolites in the genus Bupleurum , with more than 90 saponins (e.g., saikosaponin a, b, and c) isolated . Studies have found that terpenoids possess various biological activities, including anti-inflammatory, anti-oxidative, anti-fibrosis, antitumor, anti-Alzheimer’s disease, and anti-depression activities . 
For example, the xiongterpene in Ligusticum chuanxiong shows insecticide effects , the asiaticoside in Centella asiatica shows antitumor properties , and the saikosaponin d in Bupleurum chinense DC. and Bupleurum scorzonerifolium show the effects of reducing blood glucose, inhibiting inflammation, and reducing insulin resistance . 7.6. Other Compounds Chromones and phthalides also exist in AMPs and show pharmacological properties. Specifically, phthalides (e.g., ligustilide, n -butylidenephthalide, and Z -ligustilide) in Angelica sinensis show the effect of inhibiting vasodilation, decreasing platelet aggregation, as well as exerting analgesic, anti-inflammatory, and anti-proliferative effects ; butylphthalide in Ligusticum sinense shows anti-inflammatory and antithrombus effects, dilates blood vessels, and improves brain microcirculation and anti-myocardial ischemia . In terms of chromones, 3 chromones (i.e., 5 thydroxy 2 [(angebyloxy) mehyI] fuan [3, 2′: 6, 7] chrmone, angeliticin A, and noreugenin) in Angelica polymorpha , 10 chromones (e.g., cnidimoside A, cnidimol B, and peucenin) in Cnidiummonnieri (L.) Cuss. , and 22 chromones (e.g., edebouriellol, hamaudol, and 3′(R)--hamaudol) in Saposhnikovia divaricate have been identified. Studies have found that two chromones 3′S-- O -acetylhamaudol and (±)-hamaudol in Angelica morii show the effect of inhibiting Ca 2+ influx of vascular smooth muscle , prim- O -glucosylcimifugin and 5- O -methylvisammioside show antipyretic, analgesic, and anti-inflammatory effects , and chromones in Bupleurum multinerve show analgesic effects . Polysaccharides are the largest components of biomass and account for ca. 90% of the carbohydrates in plants . Studies have demonstrated that polysaccharides in medicinal plants are indispensable bioactive compounds, presenting uniquely pharmacological effects such as immunomodulatory, hypoglycemic, antitumor, anti-diabetic, and antioxidant effects, amongst others, with few side effects or adverse drug reactions . To date, polysaccharides in the 228 AMPs have also been identified, showing multiple pharmacological effects. For example, polysaccharides in Angelica sinensis present hematopoietic, antitumor, and liver protection effects ; polysaccharides in Angelica dahurica protect spleen lymphocytes, natural killer cells, and procoagulants ; and polysaccharides in Bupleurum chinense and Bupleurum smithii present the effect of macrophage modulation, kidney protection, and inflammatory alleviation . About 27,000 alkaloids presenting as water-soluble salts of organic acids, esters, and combined with tannins or sugars have been found in plants . Many alkaloids are valuable medicinal agents that can be utilized to treat various diseases, including malaria, diabetes, cancer, cardiac dysfunction, blood clotting–related diseases, etc. . Alkaloids in the 228 AMPs mainly exist in the Ligusticum , Apium , Conium , and Cuminum genera . Pharmacological studies have demonstrated that alkaloids in Ligusticum chuanxiong show the activity of inhibiting myocardial fibrosis, protecting ischemic myocardium, and relieving cerebral ischemia-reperfusion injury . A novel alkaloid 2-pentylpiperidine known as conmaculatin in Conium maculatum shows strong peripheral and central antinociceptive activity . 
Some alkaloids have been identified to show antidepressant activity, such as berberine in Berberis aristata , strictosidine acid in Psychotria myriantha , and Anonaine in Annona cherimolia ; these could be explored as an emerging therapeutic alternative for the treatment of depression. Phenylpropanoids are a large class of secondary metabolites biosynthesized from amino acids, phenylalanine, and tyrosine . Over 8000 aromatic metabolites of the phenylpropanoids have been identified in plants. These include simple phenylpropanoids (propenyl benzene, phenylpropionic acid, and phenylpropyl alcohol), coumarins, lignins, lignans, and flavonoids . 7.3.1. Simple Phenylpropanoids To date, limited simple phenylpropanoids have been identified from AMPs, including three phenylpropanoids (trans-isoelemicin, sarisan, and trans-isomyristicin) in the roots of Ligusticum mutellina . Ferulic acid, one of the phenylpropionic acids, is an important bioactive metabolite of AMPs; it mainly exists in Angelica , Ligusticum , Ferula , and Pleurospermum genera . Pharmacological studies have demonstrated that the ferulic acid in Angelica sinensis shows strong properties in inhibiting platelet aggregation, increasing coronary blood flow, and stimulating smooth muscle ; the ferulic acid in Angelica acutiloba shows antidiabetic, immunostimulant, antiinfammatory, antimicrobial, anti-arrhythmic, and antithrombotic activity ; and the ferulic acid in Ligusticum tenuissimum shows anti-melanogenic and anti-oxidative effects . 7.3.2. Coumarins Coumarins are the most widespread in 20 genera of AMPs (e.g., Angelica , Bupleurum , and Peucedanum ) and mainly include simple coumarins, pyranocoumarins, and furocoumarins . In recent years, distinct coumarins have been identified from AMPs, such as 99 coumarins in Ferula , 116 coumarins in Angelica decursiva and Peucedanum praeruptorum , and 9 coumarins in Angelica dahurica . Furthermore, 8 coumarins were selected as quality markers, including osthole (1) in Angelica biserrata and Cnidium monnieri ; columbianadin (2) in Angelica biserrata ; imperatorin (3) in Angelica dahurica and Angelica dahurica cv. Hangbaizhi; isoimperatorin (4) in Angelica dahurica , Angelica dahurica cv. Hangbaizhi, Notopterygium franchetii , and Notopterygium incisum ; nodakenin (5) in Angelica decursiva , Notopterygium franchetii , and Notopterygium incisum ; notopterol (8) in Notopterygium franchetii and Notopterygium incisum ; and praeruptorin A (9) and praeruptorin B (10) in Peucedanum praeruptorum (see and ) . To date, various biological activities of coumarins have been demonstrated, including antifungal, antimicrobial, antiviral, anti-cancerous, antitumor, anti-inflammatory, anti-filarial, enzyme inhibitory, antiaflatoxigenic, analgesic, antioxidant, and oestrogenic . 
Chromones and phthalides also exist in AMPs and show pharmacological properties. Specifically, phthalides (e.g., ligustilide, n -butylidenephthalide, and Z -ligustilide) in Angelica sinensis show the effect of inhibiting vasodilation, decreasing platelet aggregation, as well as exerting analgesic, anti-inflammatory, and anti-proliferative effects ; butylphthalide in Ligusticum sinense shows anti-inflammatory and antithrombus effects, dilates blood vessels, and improves brain microcirculation and anti-myocardial ischemia . In terms of chromones, 3 chromones (i.e., 5 thydroxy 2 [(angebyloxy) mehyI] fuan [3, 2′: 6, 7] chrmone, angeliticin A, and noreugenin) in Angelica polymorpha , 10 chromones (e.g., cnidimoside A, cnidimol B, and peucenin) in Cnidiummonnieri (L.) Cuss. , and 22 chromones (e.g., edebouriellol, hamaudol, and 3′(R)--hamaudol) in Saposhnikovia divaricate have been identified. Studies have found that two chromones 3′S-- O -acetylhamaudol and (±)-hamaudol in Angelica morii show the effect of inhibiting Ca 2+ influx of vascular smooth muscle , prim- O -glucosylcimifugin and 5- O -methylvisammioside show antipyretic, analgesic, and anti-inflammatory effects , and chromones in Bupleurum multinerve show analgesic effects . Previous studies have repeatedly emphasized that BF reduces the yield and quality of plants, especially in rhizomatous medicinal plants . Here, a total of 38 rhizomatous plants that have been reported in the 228 AMPs are associated with BF . Based on the effect degree of BF on the yield and quality, 38 rhizomatous AMPs belonging to 17 genera can be categorized into 3 classes: (1) BF significantly affects the yield and quality of 14 AMPs (i.e., Angelica acutiloba , Angelica biserrata , Angelica dahurica , Angelica dahurica cv. Hangbaizhi, Angelica decursiva , Angelica polymorpha , Angelica sinensis , Daucus carota , Heracleum hemsleyanum , Heracleum rapula , Libanotis iliensis , Libanotis seseloides , Peucedanum praeruptorum , and Saposhnikovia divaricata ), and their rhizomes and/or roots are wholly lignified and cannot be used for clinical application; (2) BF affects the yield of 11 AMPs (i.e., Angelica gigas , Bupleurum chinense , Bupleurum scorzonerifolium , Changium smyrnioides , Chuanminshen violaceum , Glehnia littoralis , Ligusticum chuanxiong , Ligusticum jeholense , Ligusticum sinense , Notopterygium franchetii , and Notopterygium incisum ), though their rhizomes or roots can be used as medicine to some extent; (3) BF has no significant effect on the yield and quality of 13 AMPs (i.e., Angelica sylvestris , Cicuta virosa , Ferula ferulaeoides , Ferula fukanensis , Ferula lehmannii , Ferula olivacea , Ferula sinkiangensis , Ferula teterrima , Levisticum officinale , Libanotis buchtormensis , Libanotis lancifolia , Libanotis spodotrichoma , and Pimpinella candolleana ), and their rhizomes or roots can be used as medicine . For example, for class (1) after BF, there was a 8.3- and 16.1-fold reduction of dry weight and quality marker ferulic acid content in Angelica sinensis and a 1.5- and 1.5-fold reduction of dry weight and quality marker isoimperatorin content in Angelica dahurica . For class (2), there was a 1.34-fold reduction of saikosaponinsands, while no significant change of dry weight in Bupleurum chinense was seen ; and a 2.0- and 1.7-fold reduction of dry weigh and polysaccharide content in Changium smyrnioides . For class (3), there was no reduction of the yield and quality of the 13 AMPs at the harvest stages . 
Generally, most Apiaceae plants are “low-temperature and long-day” perennial herbs; in other words, the plants must experience vernalization (i.e., an extended period of cool weather at 0 to 10 °C) and long days (>12 h daylight) to induce BF. Examples include Angelica sinensis , Daucus carota , and Coriandrum sativum . shows the approaches to inhibit BF of 24 AMPs. For example, the bolting rate of Angelica sinensis can be significantly decreased by planting the green stem cultivar (Mingui 2) instead of the purple stem cultivar (Mingui 1) , selecting smaller seedlings (i.e., root-shoulder diameter <0.55 cm) instead of larger seedlings , storing the seedlings at freezing temperature (i.e., <0 °C) during the overwinter stage , shading the plants under sunshade (i.e., >40%) during growth stage , and providing the plants with good growth conditions (e.g., plant intensity, nutrient and water balance) . The bolting rate of Angelica dahurica can be significantly decreased through planting pure breeds , selecting immature seeds for seeding , increasing potassic fertilizer while decreasing nitrogen and phosphorus fertilizers , and planting using standard techniques . The bolting rate of Saposhnikovia divaricata can also be significantly decreased by controlling the sunshade , sowing date , and planting density , and preventing excessive growth . To inhibit the occurrence of BF in AMPs, several measures can be used, including breeding new cultivars, controlling the seedling age and size to delay the transition from vegetative growth to flowering, storing seedlings at freezing temperatures to avoid vernalization, growing the plants under sunshade to avoid long-day photoperiodism, and planting with standard techniques to reduce pests and diseases . Extensive experiments have demonstrated that BF induces the lignification of fleshy rhizomes and enhances the degradation of metabolites . Studies on anatomical structures reveal that the ratio of secondary phloem to secondary xylem respectively changes from 2:1 to 1:10 and 2/5–1/2 to 1/2–3/4 for the rhizomes of Angelica sinensis and Angelica dahurica before and after BF; meanwhile, the number of secretory cells producing essential oils significantly decreased . Studies have found that the Early Bolting In Short Day (EBS) acts as a negative transcriptional regulator, preventing premature flowering of Arabidopsis thaliana , and co-enrichment of a subset of EBS-associated genes with H3K4me3, H3K27me3, and Polycomb repressor complex 2 has been observed ; a potential genetic resource for radish late-bolting breeding with introgression of the RsVRN1In-536 insertion allele into the early-bolting genotype could contribute to delayed bolting time of Raphanus sativus ; and peroxidases ( PRXs ) involved in lignin monomer biosynthesis were found to be down-regulated in Peucedanum praeruptorum at the bolting stage . As is known, lignin biosynthesis belongs to the general phenylpropanoid pathway, which starts from phenylalanine and is catalyzed by a series of enzymes . Specifically, phenylalanine is catalyzed to form p -Coumaroyl CoA sequentially through the three enzymes phenylalanine ammonia lyase (PAL), cinnamate 4-hydroxylase (C4H), and 4-coumarate-CoA ligase (4CL). 
Lignins are then synthesized via three sub-pathways: (1) lignins are formed from p-coumaroyl-CoA sequentially through the three enzymes cinnamoyl-CoA reductase (CCR), cinnamyl alcohol dehydrogenase (CAD), and laccases (LACs), while coniferyl aldehyde is formed from p-coumaroyl-CoA sequentially through the four enzymes hydroxycinnamoyl shikimate/quinate transferase (HCT), p-coumarate 3-hydroxylase (C3H), caffeoyl-CoA 3-O-methyltransferase (CCOMT), and CCR; (2) lignins are formed from coniferyl aldehyde sequentially through the two enzymes CAD and LAC; and (3) lignins are formed from coniferyl aldehyde sequentially through the three enzymes ferulate 5-hydroxylase (F5H), caffeic acid 3-O-methyltransferase (COMT), and LACs . Although lignin biosynthesis has been described, studies on the mechanism by which BF induces rhizome lignification are still limited. To date, this mechanism has been revealed in Angelica sinensis, in which the expression levels of genes such as PAL1, 4CLs, HCT, CAD1, and LACs are significantly upregulated at the stem-node forming and elongating stage compared with the stem-node pre-differentiation stage, leading to a reduction in the accumulation of secondary metabolites (i.e., ferulic acid and flavonoids) . In this review, we summarized the history of AMPs as TCMs, the classification of AMP species, their traditional uses, modern pharmacological uses, and phytochemistry, as well as the effect of BF on yield and quality, approaches to control BF, and the mechanisms by which BF induces rhizome lignification. Although ca. 228 AMPs, 79 traditional uses, 62 modern uses, and 5 main classes of metabolites have been recorded, their potential properties remain to be exploited. Although BF significantly reduces the yield and quality of AMPs, effective measures to inhibit BF have not been applied in the field, and the mechanisms of BF have not been systematically revealed for most AMPs. Thus, in order to effectively control the BF of AMPs and thereby improve their quality and yield, standard cultivation techniques should be applied on the one hand, and on the other, new cultivars should be developed with modern biotechnology such as the CRISPR/Cas9 system.
Rapid genome sequencing for pediatrics | aeae528d-305b-4164-aa05-7854aee3d8ac | 9826377 | Pediatrics[mh] | BACKGROUND In the years following the publication of the first draft of the Human Genome Project (Lander et al., ; Venter et al., ) technological advances have vastly improved our ability to analyze the genome. This has resulted in an increasing shift from single gene testing using the costly and time‐consuming Sanger sequencing technique to next‐generation sequencing (NGS)‐based multigene testing. NGS was initially used for academic research but soon thereafter, it began to be translated into the clinic. Today, the use of NGS within the clinical setting has become routine for the diagnosis of patients with rare diseases (RD) and cancer. Although the cost of NGS has fallen dramatically over the last decade, driven by tremendous advancements in technology, the cost of whole exome sequencing (WES) and whole genome sequencing (WGS) (collectively referred to as genomic sequencing here on in) is still a barrier to many diagnostic laboratories and it is therefore pertinent to use it where it has the highest likelihood of identifying a disease‐causing mutation and can, therefore, have the biggest impact on patient well‐being. There are estimated to be around 10,000 individual RDs (Haendel et al., ) which collectively affect hundreds of millions of patients worldwide. However, it is thought that up to 80% of these diseases have a genetic component, which means that elucidation of the molecular cause of the disease is amenable to NGS. Finding the molecular cause of a disorder gives us vital insights into the pathobiology of these diseases, which in turn improves our understanding of the biological pathways affected and offers hope for the development of novel therapeutics. To maximize the limited funds available to perform clinical NGS diagnostics, it is necessary to use the available resources in the most efficient and cost‐effective way. This is by no means straightforward as multiple factors need to be considered, which will be unique for each setting. For example, the use of unbiased genomic sequencing instead of disease‐specific gene panels or single gene tests avoids the need to perform multiple sequential tests if the first one comes back negative. This is particularly useful because every time a new causative gene is identified for an RD, gene panels need to be updated to incorporate it at much time and expense. The downside to WGS and, to a lesser extent WES, is their increased sequencing costs and the extra bioinformatic burden associated with analyzing and storing the huge amounts of data generated with these techniques. Nonetheless, WGS can be thought of as a form of investment because once you have the data from the whole genome, it can be used to retrospectively investigate any novel findings that may be published after the initial analysis has been performed. It may also be cost‐effective to target patients with RDs that have been shown to be highly tractable to genomic sequencing approaches, such as those with a neurodevelopmental phenotype, in which a diagnostic rate of up to 70% can be obtained (Acuna‐Hidalgo et al., ; Brunet et al., ; Deciphering Developmental Disorders Study, ; Heyne et al., ; Kaplanis et al., ; Pode‐Shakked et al., ; Samocha et al., ). For this review, we will focus on the burgeoning field of rapid diagnosis of critically ill pediatric RD patients who are in paediatric and neonatal intensive care units (PICU and NICU). 
For this unique cohort of patients, there are many clinical benefits to receiving a time‐critical clinical diagnosis and many cost benefits for the healthcare provider. First, because the patients are young and not yet fully developed, it is far more difficult for clinicians to make an accurate diagnosis based on their phenotype, meaning a genetic test can be the best way of reaching a confirmatory diagnosis. Also, an early diagnosis provides knowledge to inform clinical management on the best therapeutics to use, which can reduce the time to treatment and improve outcomes. There are also financial benefits to decreasing the number of costly days in the intensive care unit for neonates or children (NICU/PICU) (Farnaes et al., ; Lunke et al, ; Sanford Kobayashi et al., ; Stark, Boughtwood, et al, ; Yeung et al., ). The vital term here is “rapidly” because the health benefit for the patient and cost‐effectiveness the healthcare provider can achieve is determined by the speed to which a diagnosis can be made. The first study to demonstrate the feasibility of performing rapid WGS (rWGS) in a PICU setting was published in 2012 by Saunders and colleagues, who showed it was possible to reach a diagnosis in just 50 h (Saunders et al., ). In comparison, it typically takes 1–6 months following NGS testing to arrive at a diagnosis in most clinical settings. Since this time, more than 20 studies have been published from around the world describing the use of rapid genomic sequencing in over 1500 patients, representing a range of healthcare settings (reviewed in [Stark & Ellard, ]). Two notable randomized clinical trials, NSIGHT1 (Petrikin et al., ) and NICUSeq (Krantz et al., ) have shown that rWGS can be implemented into routine clinical practice and leads to a change of the clinical management of critically ill children. There is now unprecedented evidence to show the clinical utility of this approach and the economic healthcare advantages it offers (see also articles in this series) (Goranitis et al., ). The advances in this field have been made through technical improvements of the sequencing instruments, the use of improved bioinformatic hardware/software, and through an alignment of the disparate experts who come together in such a healthcare setting to deliver the best care possible for their patients. In fact, these advances have resulted in a new world record time of 5 h 2 min for the fastest DNA sequencing technique to sequence an entire human genome and the shortest time from sample receipt to diagnosis of 7 h 18 min (Gorzynski et al., ). CURRENT STATE OF PLAY IN RAPID GENOME SEQUENCING The maturity of rapid genomic sequencing in a critical care setting is such that its translation and implementation into routine clinical practice has been successfully achieved in a growing number of countries such as the United Kingdom, Australia, and the United States. In the United Kingdom, funding for most genomic tests, including rapid genome sequencing, is government‐based and is provided at the national level within the National Health Service (NHS). The NHS in England has implemented rWES for critically ill children since October 2019. This test is for acutely unwell children with a likely monogenic disorder when a diagnosis is required more urgently to aid clinical management, prenatal testing, or pre‐implantation genetic diagnosis. Of 361 children enrolled during the first year, 141 (38%) received a diagnosis. 
In 133 (94%) patients, the molecular diagnosis influenced clinical management (Stark & Ellard, ). The NHS in Wales is the first service in the United Kingdom to introduce a national diagnostic rWGS service for critically ill newborns and children as a front‐line test. In 2019, the All Wales Medical Genomics Service formed a multidisciplinary working group tasked with designing and implementing this service. New diagnostic testing infrastructure was established and a bespoke diagnostic pipeline to identify causative genetic variants was validated. The “Wales Infants' and childreN's Genome Service” (WINGS) was launched in April 2020. Patients are eligible for the service if a monogenic cause for their illness is suspected, a DNA sample from both biological parents is available, and a timely genetic diagnosis might alter clinical management. The service is available to pediatric and neonatal patients in intensive care units (ICUs) across Wales, and Welsh children in ICUs elsewhere in the United Kingdom (Murch et al., ). The test can be ordered by a NICU or PICU consultant or registrar (equivalent to specialist and trainee) following a telephone discussion with the on‐call clinical genetics team. Forty‐five families have completed testing in the first 2 years of the WINGS service. Pathogenic or likely pathogenic variants have been identified in 17 children. Additionally, in two cases, variants of uncertain significance (VUS) have been reported. Approval to report VUS that are relevant to patient's phenotype and incidental findings must be sought from multidisciplinary teams. These are teams of clinical scientists and consultants from clinical genetics, pediatrics, biochemistry, and other specialties that are involved in the patient's care and who meet ad hoc to discuss more complex genomic results. Mean time to reporting was 9 calendar days (range 3–26 days). These results have had significant health benefits for this patient group, including immediate clinical management changes. The highest diagnostic yields were identified in children with either neurological (57%) or metabolic (60%) phenotypes (where n > 4 patients) (personnel communication). The overall diagnostic yield of 37.5% is similar to previous research projects and other services internationally (French et al., ; Kingsmore et al., ; Mestek‐Boukhibar et al., ). Elsewhere, a pilot quality improvement study “Project Baby Bear” run in California, became the first state‐funded program to use rWGS as a first‐line diagnostic test for critically ill newborns with suspected rare genetic diseases in the United States (Dimmock et al., ). Led by Rady Children's Institute for Genomic Medicine, the study provided rWGS for 178 infants enrolled in California's Medicaid program (MediCal) hospitalized in intensive care with an aim to evaluate the clinical benefits and economic impact of this test. The data, collected in 18 months, showed that rWGS resulted in a diagnostic yield of 43%. The findings led to a change in care in 31% of the solved cases while saving $2.5 million in healthcare costs. Based on the success of Project Baby Bear, the “Ending the Diagnostic Odyssey Act 2021” was introduced which allows all 50 state Medicaid programs to cover rWGS for eligible individuals (Collins, ). In 2016, The Australian Genomics Health Alliance (Australian Genomics) was launched as a national collaborative research partnership of more than 80 organizations. 
Its aim was to integrate genomics as the standard of care into the Australian healthcare system using a whole-of-system approach, building the evidence to inform national health policy (Stark, Schofield, et al., ). The Australian Genomics Acute Care program built upon the prior experience of implementing rWES across two hospitals in 2016–2017. Participants were acutely unwell pediatric inpatients (0–18 years) with suspected monogenic disorders. The study provided a diagnosis for 52.5% of patients, changed management in 57% of diagnosed patients, and showed that diagnosis by rWES costs half that of diagnosis by usual care (Stark et al., ). A more recent scaled-up study investigated the feasibility of ultra-rWES in critically ill pediatric patients with suspected monogenic conditions in the Australian public healthcare system. This multisite study, which included 12 hospitals and 2 laboratories, aimed to deliver genomic results within 5 days to 108 patients. As in the previous study, NICU or PICU patients with a likely monogenic disorder were eligible if they had been referred to the clinical genetics service. Other inpatients were also included if a rapid result was likely to alter clinical management (e.g., organ transplant decisions). The diagnostic yield was 51% and the mean time to report was 3.3 days (Best et al., ; Lunke et al., 2020). In July 2020, the study team received further funding to drive the expansion of this service and transition to WGS. These examples highlight the astonishing progress made in the field of pediatric rapid diagnostics and its translation from a research endeavor to a routine clinical test. However, implementing a test such as rapid genomic sequencing in a clinical setting still poses a number of challenges (described below) that need to be overcome before it can be adopted more widely.

CHALLENGES SURROUNDING RAPID GENOMIC SEQUENCING AND BIOINFORMATICS

For rapid genome sequencing to be clinically useful and financially effective, it is imperative that all steps along the workflow are optimized to run smoothly and efficiently. After sample collection, there are certain steps that are difficult to speed up; for example, it takes a set time to extract DNA from blood. Some steps are already optimized, such as the commonly used sequencing library preparation kits purchased from commercial vendors, and other steps can be streamlined using automation, such as liquid-handling robots. It is noteworthy that if rWES is being performed, the hybridization stage will result in a longer library preparation time compared with rWGS (~2 days for trio rWES vs. ~2.5–3 h for trio rWGS). In all cases, an optimized and well-communicated sample triage, testing, and analysis workflow is crucial to the efficient processing of the sample through diagnostics, improving turnaround times for patients. Access to an appropriate NGS platform is equally essential for timely processing of the sample and for producing sufficient depth of coverage in a cost-effective manner. In general, a depth of coverage of at least 20× across the genome is required to accurately identify single nucleotide changes. Illumina sequencing machines are commonly used by clinical laboratories and researchers as standard devices; however, several models are available, with differing specifications. For human WGS, the NovaSeq system is recommended, with four flow cells available for use, all with differing capabilities.
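To make the coverage requirement concrete, the short calculation below converts a target mean depth into the raw sequencing yield a run must deliver. The genome size, 40× target depth, read length, and 3000 Gb run capacity used here are illustrative assumptions for a back-of-the-envelope estimate, not the specification of any particular instrument or service.

```python
# Back-of-the-envelope yield estimate for short-read WGS (illustrative values only).

GENOME_SIZE_GB = 3.1   # approximate haploid human genome size in gigabases
TARGET_DEPTH = 40      # assumed mean depth, comfortably above the ~20x minimum for SNVs
READ_PAIR_BP = 200     # 100 bp paired-end reads contribute 200 bp per read pair

# Raw yield needed for one genome is simply depth multiplied by genome size.
yield_per_genome_gb = TARGET_DEPTH * GENOME_SIZE_GB               # ~124 Gb
read_pairs_per_genome = yield_per_genome_gb * 1e9 / READ_PAIR_BP  # ~620 million pairs

# A trio (proband plus both parents) triples that requirement.
trio_yield_gb = 3 * yield_per_genome_gb                           # ~372 Gb

# Number of genomes at this depth that fit on a run producing up to 3000 Gb.
genomes_per_run = 3000 / yield_per_genome_gb                      # ~24 genomes

print(f"one genome:  ~{yield_per_genome_gb:.0f} Gb ({read_pairs_per_genome/1e6:.0f} M read pairs)")
print(f"one trio:    ~{trio_yield_gb:.0f} Gb")
print(f"3000 Gb run: ~{genomes_per_run:.0f} genomes at {TARGET_DEPTH}x")
```

Framed this way, a flow cell's quoted output translates directly into how many patients or trios can be multiplexed per run at a chosen depth, which is the trade-off behind the instrument specifications discussed next.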
Depending on the flow cell, throughput ranges from four to 48 human genomes in a single run, taking between 25 and 44 h and producing up to 3000 Gb of data. Table lists the differing specifications for 100 bp paired-end reads, but specifications differ again depending on the choice of read length. Therefore, careful planning and management are needed to ensure that the correct flow cells and settings are used in each case. In addition, recent advances in long-read sequencing technology have also been used to demonstrate the utility of long reads in rWGS (Goenka et al., ). The output from a genomic sequencing run is a set of FASTQ files that contain the sequence data for the millions or billions of bases of DNA along with quality score metrics. To convert these data into manageable information on genetic variation, efficient, accurate, and validated bioinformatics analysis pipelines are needed. All pipelines follow the same key steps: quality filtering, then alignment to the reference genome, followed by variant calling, and finally variant annotation (Figure ). These analyses can be computationally intensive and time-consuming, performing complex tasks such as implementing algorithms to align millions of reads to the three billion base pair human reference genome. Given this complexity, it is unsurprising that processing of a single genome can take ~36 h, even on a large, well-powered compute cluster (Goranitis et al., ). The choice of software appropriate for the analysis task is key to both the accuracy and the run time of the pipeline, and a large number of studies have been published comparing software options (Chen et al., ; Hatem et al., ; Kumaran et al., ; Musich et al., ). Attempts to standardize these approaches have been made, with best practice guidelines recommended by the Broad Institute for use with their Genome Analysis Toolkit (GATK4), which is commonly used, although other options are available (Van der Auwera & O'Connor, ). A recent review (Koboldt, ) discusses further options for best practices in clinical variant calling, including software choices for aligners and variant callers. In terms of maintaining the accuracy of the pipeline while running it rapidly, recent innovations include the development of BWA-MEM2, a faster version of the popular Burrows–Wheeler Aligner (BWA) software (Vasimuddin et al., ); the closed-source DRAGEN™ Bio-IT Platform ( https://emea.illumina.com/products/by-type/informatics-products/dragen-bio-it-platform.html ) (Illumina), which encompasses all stages of analysis and allows a trio of whole genomes to be processed in around 6 h; and the open-source Dragmap version of the DRAGEN aligner ( https://github.com/Illumina/DRAGMAP ) (Illumina), making this computing capability available to all. As with the wet laboratory work, options are available to optimize these bioinformatics processes, such as the utilization of high-performance compute clusters with a batch-queuing system, allowing for parallelization of tasks; the use of sophisticated workflow languages, such as Nextflow ( https://www.nextflow.io/ ) and Snakemake (Molder et al., ); and simple solutions such as networking the sequencers to allow direct saving of the data to the compute cluster, removing the need for lengthy transfer of raw data, which can also lead to corruption or loss of data.
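To give a concrete sense of what such a pipeline looks like in practice, the sketch below chains widely used open-source tools (BWA-MEM, samtools, GATK HaplotypeCaller, and Ensembl VEP) for a single sample. The file names, thread counts, and option choices are assumptions for illustration only; a clinical deployment would use a validated, laboratory-specific configuration, typically expressed in a workflow language such as Nextflow or Snakemake rather than a plain script.

```python
"""Minimal germline short-variant workflow sketch (illustrative only).

Assumes bwa, samtools, gatk, and vep are installed and on PATH, and that the
reference FASTA has already been indexed; paths and options are placeholders.
"""
import subprocess

REF = "GRCh38.fa"
SAMPLE = "proband"
R1, R2 = "proband_R1.fastq.gz", "proband_R2.fastq.gz"


def run(cmd: str) -> None:
    """Run one pipeline step and stop the workflow if it fails."""
    print(f"[pipeline] {cmd}")
    subprocess.run(cmd, shell=True, check=True)


# 1. Align reads to the reference, then coordinate-sort and index the alignments.
run(f"bwa mem -t 16 {REF} {R1} {R2} | samtools sort -@ 4 -o {SAMPLE}.bam -")
run(f"samtools index {SAMPLE}.bam")

# 2. Call germline SNVs and small indels.
run(f"gatk HaplotypeCaller -R {REF} -I {SAMPLE}.bam -O {SAMPLE}.vcf.gz")

# 3. Annotate the resulting VCF with consequence and frequency information.
run(f"vep --input_file {SAMPLE}.vcf.gz --output_file {SAMPLE}.annotated.vcf "
    f"--vcf --cache --offline")
```

The wall-clock savings described above come largely from running steps like these in parallel (for example, per chromosome) on a well-provisioned cluster, or from replacing them with accelerated implementations such as DRAGEN.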
Before any sample goes through a bioinformatics pipeline for diagnosis, substantial groundwork is needed to validate the process and ensure accuracy. This encompasses the use of knowns, such as Genome in a Bottle samples (Zook et al., ), and in-house samples previously characterized on separate platforms, to calculate the specificity and sensitivity of the pipelines. Care must also be taken to ensure that all potential sample types can be used, that processing is efficient, and that the pipeline produces usable outputs for clinical scientists. The ACGS has published guidelines for best practices in the validation of bioinformatics pipelines (Whiffin et al., ), and Marshall and colleagues have recently published a review on best practices for validation. Once the variant data are in the form of a VCF (variant call format) file, they next need to be annotated with functional information such as the variant consequence, the frequency of the variant in the population (Karczewski et al., ), and a range of other metrics that assess the potential of the variant to be pathogenic (Adzhubei et al., ; Kumar et al., ; Lek et al., ; Rentzsch et al., ; Williams et al., ). This annotation step is carried out by specialist software, such as the Variant Effect Predictor (McLaren et al., ), which searches a series of predownloaded databases for this information. Armed with this information and the patient's phenotypic information, a diagnosis is made by a clinical geneticist or clinical scientist according to set guidelines (Richards et al., ) agreed to by the clinical diagnostic community. However, without an appropriate filtering strategy (Figure ), the number of variants could be as high as several million, a completely unmanageable number for assessment. Filtering strategies include applying hard cut-offs based on metrics such as base quality, mapping quality, and coverage; removal of noncoding variants; filtering by variant consequence; filtering by prevalence in the population using gnomAD; and filtering by inheritance pattern where trios are available (Wright et al., ). The biggest challenge is to narrow down the list of variants to a manageable number, ensuring rapid analysis by the clinical scientists, while also ensuring that any potential causative variants are not removed. With appropriate filtering, clinical scientists can be left with around 25–50 variants to assess manually. This filtering can be aided by the addition of "white lists" of known pathogenic variants, such as those reported in ClinVar. This strategy was suggested in the PAGE study looking at prenatal rWES for fetal diagnostics (Lord et al., ). In addition, a "gene panel" approach is often applied, focusing only on those genes associated with the phenotype or condition. These gene panels are readily available from resources such as the Genomics England PanelApp (Martin et al., ) and are regularly reviewed and updated; however, they restrict the analysis to known disease-associated genes and can, therefore, miss variants in genes with novel associations. A similar approach is taken to the removal of noncoding variants, unless they are already known to have a clinical impact, again risking missing novel pathogenic noncoding variants. Reaching a diagnosis requires an understanding of how the implicated gene might impact the patient's phenotype, as well as how the identified variant might affect the function of the protein, requiring extensive biological knowledge on the part of clinical scientists. The assignment of a diagnosis can be another time-consuming step, as each variant passing the filtering criteria needs to be interpreted individually.
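As a concrete, deliberately simplified illustration of the tiered filtering described above, the sketch below keeps only rare, protein-altering variants in a small phenotype-driven gene panel. The INFO keys (gnomAD_AF, GENE, CSQ), gene names, and thresholds are invented for illustration; real pipelines use laboratory-validated cut-offs, richer annotations, and inheritance-aware filters.

```python
"""Toy variant-filtering pass over an annotated VCF (illustrative only)."""
import gzip

PANEL_GENES = {"SCN1A", "KCNQ2", "STXBP1"}   # hypothetical phenotype-driven panel
DAMAGING = {"stop_gained", "frameshift_variant", "splice_acceptor_variant",
            "splice_donor_variant", "missense_variant"}
MAX_POP_AF = 0.001                            # retain only rare variants
MIN_QUAL = 30.0                               # hard quality cut-off


def info_field(info: str, key: str, default: str = "") -> str:
    """Extract a single key=value entry from a semicolon-separated INFO string."""
    for entry in info.split(";"):
        if entry.startswith(key + "="):
            return entry.split("=", 1)[1]
    return default


kept = []
with gzip.open("proband.annotated.vcf.gz", "rt") as vcf:
    for line in vcf:
        if line.startswith("#"):
            continue  # skip header lines
        chrom, pos, _id, ref, alt, qual, _filt, info = line.rstrip("\n").split("\t")[:8]
        if qual != "." and float(qual) < MIN_QUAL:
            continue  # fails the basic quality cut-off
        if float(info_field(info, "gnomAD_AF", "0") or 0) > MAX_POP_AF:
            continue  # too common in the population to be a plausible cause
        gene = info_field(info, "GENE")
        consequence = info_field(info, "CSQ")
        if gene in PANEL_GENES and any(term in consequence for term in DAMAGING):
            kept.append((chrom, pos, ref, alt, gene, consequence))

print(f"{len(kept)} candidate variants retained for manual review")
```

The practical aim, as noted above, is to reduce millions of called variants to a shortlist of tens for manual assessment without discarding the causative one, which is why panel content and frequency thresholds are reviewed so carefully.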
To speed up variant interpretation, progress has been made in the use of automated machine-learning methods that combine the patient's phenotypic information with the details from the diagnostic guidelines; this approach has been shown to result in a time saving of ~22 h (Clark et al., ). However, this is still an active area of research and is not widely implemented. With the addition of some of these time-saving capabilities, a sample can go from receipt at the diagnostic center to a potentially classified variant in just 3 days (Figure ). In summary, sample preparation, sequencing, and bioinformatics remain challenging areas in rapid whole genome diagnostics. Careful planning and thorough validation are required to ensure that all stages within the sample pathway are accurate and optimized.

ETHICAL AND INCIDENTAL FINDINGS CHALLENGES

Alongside the technical challenges of implementing rapid genomic sequencing, there are also ethical and practical challenges to offering such services. Ethical issues can include obtaining informed consent, the discovery of incidental findings unrelated to the reason for testing, the privacy of genomic data, the possibility of discrimination based on the findings, the potential impact on the parent-child relationship, and the prioritization of resources in a publicly funded health service. Stark and Ellard discussed these ethical challenges in their recent review, acknowledging that many of these issues are common to any genomic sequencing but may be compounded by the clinical situations in which rapid genomic sequencing is being offered, when individuals or their carers are being asked to make decisions about genomic testing quickly and when they or their relatives are in vulnerable situations. This makes it harder to achieve informed consent, though Stark and Ellard's review found that most parents consenting to rapid genome sequencing for their unwell newborn child did not regret testing and believed it to be useful. Generally, the parents' focus was on diagnosis, and rapid genome sequencing provided this opportunity, though they did report some challenges associated with consenting to this testing and many felt overwhelmed. Similarly, healthcare professionals viewed rapid genomic sequencing as being very helpful for clinical diagnosis, though they generally felt that this should be led by the genetics team, who have greater knowledge about genomic testing as well as expertise in providing information and support to individuals making these decisions and adapting to the results (Stark & Ellard, ). One key issue is the possibility of identifying incidental or additional findings unrelated to the symptoms under investigation, which indicate that the individual, and potentially their family members, are at risk of other health conditions. It can be argued that this is a benefit for medically actionable conditions, as appropriate management can be put in place to reduce the risk or achieve better outcomes for the individual, and the American College of Medical Genetics and Genomics (ACMG) has published a list of conditions for which incidental findings from genomic screening should be reported (Miller et al., ). It is worth noting that this list is not definitive and will change over time with increases in our understanding and changes in the treatment and management options available, so individuals tested at different times may not be tested for the same conditions.
For some conditions, it has been argued that testing should go even further, with the possibility of systematically screening children for familial hypercholesterolemia (an inherited form of high cholesterol that can lead to heart attacks and strokes) in order to identify their parents who may be at risk (Wald & Wald, ). However, incidental findings may also relate to conditions that are not medically actionable (i.e., there is no screening or management available to improve health outcomes), in which case the balance of benefits and harms in reporting these findings is more debatable. With Huntington's disease, a degenerative condition, only around a fifth of those with a 50% chance of having the causative genetic variation chose to have presymptomatic testing to find out whether or not they would develop the condition in later life (Baig et al., ). Therefore, though many people think that they would like to know predictive information about their future health (Middleton et al., ), when individuals are actually faced with finding out this kind of information about the future, many prefer not to know. Those being offered genomic testing to try to identify a diagnosis for their seriously unwell relative are unlikely to think carefully about whether or not they would want to know this kind of incidental information. The chance of identifying incidental findings is influenced by the filtering strategy used as part of the pathway, as discussed above. While the whole genome is sequenced, the data analysis can be adapted as desired. For example, a gene panel approach can be used, looking only at genes known to be associated with a genetic disease, or even only at those genes associated with a particular phenotype. However, it could be argued that this misses an opportunity to identify medically actionable genetic conditions (such as those on the ACMG list), leaving individuals unaware of their risk, with the resulting impact on health outcomes in later life and on healthcare costs. It would also be important to ensure that patients and their healthcare professionals are aware that, while the genome has been sequenced, it has not all been analyzed, so some genomic variants will have been excluded. Even if testing covered genes associated with genetic disease as well as these medically actionable conditions, this testing strategy relies on current genomics knowledge and means that novel causes of genetic conditions will not be identified, reducing diagnostic yield. Therefore, a gene-agnostic approach may be preferable, identifying potentially pathogenic variants in all parts of the genome, with the associated risk of incidental findings. A slightly modified version could be considered, excluding particular genes associated with diseases that are not medically actionable, in order to maintain a higher diagnostic yield while reducing the chance of these findings. However, again, it may be difficult to reach consensus as to which genes should be excluded. If they are to be excluded, it may be more practical to do this at the data analysis stage, rather than carrying out a full analysis and not reporting these findings. However, patients and their families may start to request their raw genomic data for analysis using various online services, so these incidental findings may be identified elsewhere. Implementation of rapid genomic testing pathways needs to include consideration of who will be tested, what will be tested, and the associated clinical pathway.
As outlined above in the discussion of the current state of play, testing is offered to those who are acutely ill with a likely monogenic disorder where testing is likely to make a difference to management. In addition, DNA samples have been required from both parents to enable analysis, which has implications for equality of access, as this excludes some patients from testing if both parents are not available. However, as the technology moves beyond the pilot stage into routine practice and our knowledge and analysis improve, it becomes increasingly possible that trio analysis will not be essential. Clinical judgment is required to target testing appropriately to these patients, and both time and expertise are needed to provide this service, which has implications for workforce planning, so services looking to implement rWGS will need to consider how this can be managed. As with many specialties, it may be necessary both to obtain expertise from other hospitals or areas and to upskill local staff to meet the needs of patients. Genomic testing should be offered to patients by healthcare professionals, such as genetic counselors, with both a good understanding of genomic testing and the skills to help individuals with decision-making. This will facilitate the provision of informed consent for testing, though it could be argued that it is not possible to obtain fully informed consent due to the breadth of possible findings that can arise. These staff need to be well informed about the testing being offered, the potential findings that could be obtained, and also what may not be revealed by testing. They also need to have the skills to deliver the results and provide support to help individuals and families assimilate and adapt to their results. If incidental findings are discovered in infants and children, the parents will be given this information, and it will be important to consider how this will be provided to the child themselves as they grow older, to avoid a further ethical issue of others knowing about a risk of which the individual themselves is unaware. Again, the healthcare professionals giving the results should support parents in considering how and when the information will be passed to the child, and may need to work with families to ensure that they have the skills, knowledge, and intention to pass this information to the individual as they become older. As highlighted by this discussion, it is important for healthcare professionals offering WGS to have a good understanding of what testing is being carried out, what may be found and missed, and the ethical issues associated with this. Guttmacher et al.'s review notes a range of studies indicating a lack of genomics knowledge and confidence among nongenetics medical professionals from a range of countries and specialties. Therefore, there is a need for genomics education of both medical and other healthcare professionals. In the United Kingdom, genomics education has been incorporated into medical school curricula and, within England, a national approach to genomic education for health professionals is being coordinated by Health Education England's Genomic Education Programme (Slade et al., ). However, it will take time to upskill all healthcare professionals to provide genomic testing across the UK's National Health Service.

FUTURE RAPID-TESTING STRATEGIES

Finally, it is worth looking to the future to think about what additional rapid tests could be translated to augment the analysis of the genomic data.
The reason for this is that, while the application of rapid genomic sequencing has greatly facilitated the identification of disease-associated genetic variants in critically ill children, around half to two-thirds of patients remain undiagnosed. This is partly due to challenges in the interpretation of genomic variants and our limited understanding of how variants impact gene expression and protein abundance, as well as protein structure and interactions, but also due to our failure to identify some types of disease-causing variants, such as deep intronic variants, noncoding triplet repeats, variants in enhancer and promoter regions, and larger structural variants. When rapid genomic sequencing returns inconclusive results, analyzing multiple layers of biological activity together can help us better understand the functional aspects of genomic variants and their role in disease. The functional genomics techniques that can be utilized for this purpose are transcriptomics, epigenomics, proteomics, and metabolomics. For example, metabolic and biochemical tests can guide genomic analysis or provide insights into the pathogenicity of variants within genes involved in metabolic pathways, and are already routinely used to aid clinical diagnosis. For the purpose of this review, we will focus on the use of transcriptomics as a complementary test to genomic sequencing, as there is already mounting evidence to show its potential utility. Studying the transcriptome, the RNA expressed from the genome, provides valuable pathogenicity information on sequence variants. RNA studies can validate candidate splice-disrupting mutations, confirm whether candidate truncating variants cause nonsense-mediated decay, and identify splice-altering variants in both exonic and deep intronic regions. The main limitation of transcriptome studies is the need for an accessible tissue source with suitable expression levels of clinically relevant genes. For patients, this often means additional invasive sampling, such as a skin biopsy followed by establishing fibroblast cell line cultures. Fibroblasts express the majority of known disease genes (Yépez et al., ) and can be used to derive pluripotent stem cells, which express over 27,000 genes (Bonder et al., ). In a rapid setting, culturing might not be possible due to time constraints; therefore, blood sampling might be preferred. Multiple studies support the use of blood as a viable source of RNA material in different RDs, including neurological disorders. To help overcome the limitation of tissue specificity, resources are now available that help identify clinically accessible tissues (e.g., MAJIQ-CAT [Aicher et al., ]) and assess the feasibility of RNA sequencing (e.g., MRSD [Rowlands et al., ]). Targeted studies using reverse transcription polymerase chain reaction (RT-PCR) have been used for some time to functionally validate and reclassify VUS (Le Quesne Stabej et al., ; Wai et al., ). More recently, the reduced cost of RNA sequencing (RNAseq) makes it an equally viable option not only for confirmation of candidate variants, but also for performing a transcriptome-guided genomic analysis. The use of transcriptomics as a secondary diagnostic tool has been extensively reviewed elsewhere. Briefly, transcriptome-wide RNA-seq data can be used to streamline and direct downstream analysis by prioritizing causative variants that have been overlooked or completely filtered out by genomic sequencing.
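To illustrate the core idea behind aberrant-expression detection in its simplest form, the sketch below flags genes whose expression in a patient lies far outside the range observed in controls. This z-score toy is a deliberate simplification of what dedicated workflows such as DROP do with proper count models, covariate correction, and multiple-testing control; the gene names, values, and threshold are invented for illustration.

```python
"""Toy aberrant-expression screen on normalised RNA-seq values (illustrative only)."""
import math

# Hypothetical log2-normalised expression values in six controls and one patient.
controls = {
    "GENE_A": [10.1, 9.8, 10.3, 10.0, 9.9, 10.2],
    "GENE_B": [5.2, 5.5, 5.1, 5.4, 5.3, 5.0],
}
patient = {"GENE_A": 10.1, "GENE_B": 1.2}   # GENE_B looks drastically under-expressed

Z_CUTOFF = 3.0   # flag genes more than three standard deviations from the control mean

for gene, values in controls.items():
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
    z = (patient[gene] - mean) / sd if sd > 0 else 0.0
    if abs(z) >= Z_CUTOFF:
        print(f"{gene}: expression outlier (z = {z:.1f}); prioritise variants in this gene")
```

A gene flagged in this way can then be taken back to the genomic data, drawing attention to variants, including noncoding or splice-region changes, that a DNA-first filtering strategy may have deprioritized.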
RNAseq has been shown to identify pathogenic variants that have been missed by DNA-based testing alone, improving diagnostic yield by 7.5%–36% across a diverse range of rare disorders (Cummings et al., ; Frésard et al., ; Gonorazky et al., ; Lee et al., ; Maddirevula et al., ; Murdock et al., ; Rentas et al., ; Yépez et al., , ). One notable study (Murdock et al., ) used a novel transcriptome-directed analysis approach to provide diagnoses for patients with rare Mendelian disorders. Instead of looking at candidate genes derived from DNA sequencing, the authors suggest starting with RNAseq to direct the prioritization of DNA variants. The study demonstrates the clinical application of the Detection of RNA Outlier Pipeline (DROP) (Yépez et al., ), an automated workflow that detects aberrant expression, aberrant splicing, and mono-allelic expression of genes in whole blood and fibroblasts. This approach, based on the detection of aberrant expression and splicing, resulted in a diagnostic yield of 17% in patients with a wide range of conditions, including neurological, musculoskeletal, and immune phenotypes.

CONCLUSIONS

The evolution of genomic sequencing since the completion of the Human Genome Project has transformed our understanding of how human genetic variation can lead to RDs. Within the clinical setting, NGS techniques are routinely used to diagnose RD patients, with the recent 100,000 Genomes Project demonstrating a diagnostic rate of 25% in patients spanning a wide spectrum of clinical phenotypes (Smedley et al., ). Nonetheless, there are still barriers to implementing genomic sequencing for clinical diagnostics, including costs, the availability of trained personnel, and the substantial bioinformatic and compute infrastructure required to process, interpret, and store patients' genomic data in a safe environment. It is thus necessary to identify areas where the implementation of genomic sequencing can have a large positive impact. We argue that, given the evidence described above, the use of rapid genomic sequencing to diagnose acutely ill children with a suspected monogenic disease is such an environment. There is compelling evidence to show that being able to rapidly diagnose such children can lead to improvements in clinical management. The rapid nature of the tests also leads to substantial cost reductions for the healthcare provider, as the children can be treated more quickly and moved to lower-dependency beds. In the future, we believe rapid genomic sequencing will become common practice for healthcare providers across the globe, and advances in technology will further improve the time to diagnosis as well as costs. Orthogonal techniques such as RNAseq will augment the genomic data and undoubtedly improve diagnostic rates even further. There is, therefore, much anticipation to see how this exciting field will evolve and the promise it holds to improve diagnosis for critically ill children. The authors declare no conflict of interest.
Pediatric Endocrinology Milestones 2.0—guide to their implementation

The Accreditation Council for Graduate Medical Education (ACGME), jointly with the American Board of Medical Specialties (ABMS), established the six core domains of clinical competency in 1999, providing a framework for physician training and assessment of trainee progress across all specialties. These core competencies include patient care (PC), medical knowledge (MK), interpersonal and communication skills (ICS), practice-based learning and improvement (PBLI), professionalism (PROF), and systems-based practice (SBP). To facilitate the integration of competencies into individual subspecialties, the Milestones were introduced in 2013 as part of the Next Accreditation System. For each subspecialty, milestones describe stepwise trajectories under each of the six core competencies and provide examples to guide the development of physicians in graduate medical education. In addition to serving as a structure to conceptualize physician development, milestones are used to assess trainee competence and progression throughout their post-graduate clinical training. The integration of the Milestones into training has faced limitations. Feedback obtained by the ACGME had several common themes. First, the original Milestones were lengthy and complex, making them difficult to assess and time-consuming to complete. The multifaceted descriptors of individual milestones also made it difficult to assign levels to trainees who did not meet all characteristics. Further, the complicated language of the milestones led to variations in implementation, thus preventing a shared mental model among programs. Finally, the examples used in the sub-competencies were often not applicable across all specialties. To address these common concerns, the ACGME launched the Milestones 2.0 project in 2016 with the goal of developing harmonized, consistent, and applicable milestones for each subspecialty. The ACGME first developed cross-specialty "harmonized" milestones for ICS, PBLI, PROF, and SBP, ensuring that all specialties would now use the same descriptions for each of these milestones. To create these milestones, the ACGME reviewed feedback provided on the original milestones and data published regarding the Milestones implementation and limitations. They then created development groups of key stakeholders (content experts, directors, interprofessional team members, and other faculty) to develop unified milestones for the above domains and sub-competencies. Public comment was then invited on these milestones prior to finalization. The ACGME then assembled working groups for each specialty to develop specialty-specific content for the medical knowledge (MK) and patient care (PC) competencies as well as a specialty-specific supplemental guide. The goals of this project were to revise the Milestones to be more understandable and user-friendly while creating a shared mental model among program leadership, faculty, and fellows. In a review of shared mental models in GME, the ACGME states that "a shared mental model refers to a team's common understanding of their task, interpretation of their environment, and required collaboration." Shared mental models represent one strategy for addressing some of the common limitations of the original Milestones, namely the variability among evaluators of an individual.
Therefore, one of the major goals of Milestones 2.0 was to create a usable shared mental model in order to provide more consistent constructive feedback while decreasing inter-evaluator variability. Additionally, the group aimed to include subspecialty-focused skills and examples in the Milestones and to exclude any sub-competencies that are irrelevant to the field of pediatric endocrinology. The supplemental guide was then developed with the goal of providing additional support for the implementation of the Milestones into practice. Finally, much of the wording in the original milestones focused on negative aspects of a fellow's performance or indicated goals they had not yet obtained rather than highlighting their progress. Therefore, the working group aimed to reword milestones to promote a growth mindset by focusing on the skills and goals that have been reached while identifying areas for continued improvement. A "growth mindset" refers to the shared belief that learners are capable of improvement with appropriate coaching and effort.

Identification of the working group

The working group was assembled by identifying representatives from the Pediatric Endocrine Society Training Committee and soliciting applications from the community of Pediatric Endocrinologists through the ACGME, Pediatric Endocrine Society, the Association of Pediatric Program Directors, Council of Pediatric Subspecialties, and the American Board of Pediatrics. The committee comprised twelve pediatric endocrinologists involved with pediatric endocrine education, including four current fellowship program directors, three former fellowship directors, one current associate fellowship program director, two current fellows, and two additional practicing pediatric endocrinologists with experience and interest in education and fellow assessment. Additionally, three representatives from the ACGME facilitated the group's reviews and discussions.

Governing principles

The working group identified several governing principles for the development and assessment of the competencies and the Supplemental Guide, using the ACGME's reports on the common concerns as a guide for areas of improvement. These principles included ease of interpretation, applicability to the role of a Pediatric Endocrine Fellow, and clear progression of skills across the Milestones. By using these principles, the working group aspired to produce a tool (the Milestones) that could more easily be incorporated and utilized by the busy practicing clinician while also providing valuable feedback to trainees. The working group reviewed Milestones 2.0 from other subspecialties within pediatrics and internal medicine as a model for the development of pediatric endocrine-specific sub-competencies and milestones.

Milestones development

To guide the identification of sub-competencies to be evaluated, the working group reviewed the Pediatric Subspecialty Milestones 1.0 (currently used to assess pediatric endocrinology fellows), Milestones 2.0 for Internal Medicine, Endocrinology, and Pediatrics, and drafts of Milestones 2.0 for other pediatric subspecialties. The group met virtually to identify sub-competencies appropriate for pediatric endocrinology, then convened in person to develop language for the levels of each sub-competency. ACGME representatives facilitated group discussions, during which the group reviewed the above materials and came to a consensus for each sub-competency.
The Milestones were designed to allow for growth of the fellow over the course of the fellowship.

Supplemental guide development

The working group developed a supplemental guide to aid in the interpretation of the sub-competencies and milestones, as well as to provide curricular opportunities to support the achievement of the milestones and potential assessment tools.

Community feedback

Once the sub-competencies, individual milestones, and supplemental guide were developed, the draft was released to the public for comment.
Comments were solicited through the Pediatric Endocrine Society, the Association of Pediatric Program Directors, and the ACGME. Comments were reviewed, and suggested changes to the sub-competencies and milestones were made.

The final product of the working group was the Pediatric Endocrine Milestones 2.0 and the Supplemental Guide, copies of which are included as . The Milestones 2.0 were officially implemented in July 2023.

Major Changes in Milestones 2.0

Pediatric Endocrinology Milestones 2.0 brings important changes in sub-competency content as well as milestone application and usability. A major goal of Milestones 2.0 is to make the milestones more understandable and user-friendly and to promote the creation of a shared mental model among program leadership, faculty, and fellows. This improved tool may allow fellows and faculty to track fellows' development throughout fellowship and identify areas of strength and weakness to be addressed early in training.

Changes to milestone complexity and wording

Milestones 1.0 frequently included lengthy and complex descriptions, making their application arduous. Additionally, educational jargon made interpretation challenging for those without a strong background in educational principles. In Milestones 2.0, descriptions have been greatly shortened and jargon has been removed. Tables and (below) provide examples comparing the original milestones for Patient Care (PC) to their revised forms in Milestones 2.0. These comparisons are examples of the simplification of the wording and the removal of the educational jargon. While the total number of sub-competencies for pediatric endocrinology has increased from 21 to 24, we expect that milestone assignments should be more straightforward and therefore ideally faster to complete.

Changes to phrasing and interpretation of milestone levels

Among the most important changes in Milestones 2.0 is how the milestone levels are phrased and applied to fellows. Milestones 1.0 created five levels based on the Dreyfus model of adult skill acquisition. These levels were meant to follow trainees from novice (level 1) to expert (level 5). Because level 5 represented individuals who were experts in pediatric endocrinology, it represented an "aspirational" target that would rarely, if ever, be achieved by fellows. It was also unclear if fellows should "reset" to level 1 after achieving 3s and 4s in a particular milestone at the time of residency graduation. Milestones 2.0 intends to document developmental progression during fellowship, rather than the entire training or career trajectory. Thus, it is expected that many fellows will enter fellowship at a level 1 (novice fellow) and subsequently progress at varying rates to level 2 (advanced beginner), level 3 (competent), and level 4 (proficient). This is especially true for the patient care (PC) and medical knowledge (MK) competencies, which are now specific to pediatric endocrinology. While not a graduation requirement, it is expected that most fellows will achieve a level 4 in most milestones prior to graduation. Level 5 now represents an expert fellow, corresponding to a fellow performing exceptionally in a given sub-competency. For a given sub-competency, the ACGME provides guidance that approximately 8–10% of fellows should achieve a level 5 prior to graduation. The phrasing of Milestones 2.0 has also been changed to promote a growth mindset. Milestones 1.0 frequently used negative language that emphasized skills that a fellow was not doing or was doing incorrectly.
Milestones 2.0 focuses on what the fellow is doing correctly at each developmental stage. For example, Milestones 1.0, sub-competency PM, Level 2 includes the description "is unable to focus on key information so conclusions are often from arbitrary, poorly prioritized, and time-limited information gathering." In Milestones 2.0, Level 2 of the same sub-competency reads "Develops and implements management plans that require modification for routine endocrine presentations." It is recognized that some fellows may not yet have achieved level 1 when they enter fellowship. Therefore, Milestones 2.0 retains the option of selecting "not yet completed level 1" for assessments.

Creation of harmonized milestones

A major change for Milestones 2.0 is the creation of "harmonized" milestones in four competencies: Professionalism (PROF), Practice-based learning and improvement (PBLI), Interpersonal and communication skills (ICS), and Systems-based practice (SBP). In Milestones 1.0, specialties created their own content for each competency, leading to highly variable themes and descriptions. These inconsistencies created challenges in comparing milestone progression among specialties and in sharing learning tools and resources. Having recognized that PROF, PBLI, ICS, and SBP have common, overlapping themes for most specialties, the ACGME assembled four diverse groups to develop cross-specialty "harmonized" milestones for Milestones 2.0. Pediatric Endocrinology Milestones 2.0 adopts harmonized milestones in each of these four competencies.

Creation of milestones specific to pediatric endocrinology

Pediatric Endocrinology Milestones 2.0 modifies the sub-competencies and milestones for Patient Care (PC) and Medical Knowledge (MK) to be more tailored to pediatric endocrinology. The changes to the PC sub-competency milestones are outlined in Table below. Milestones 1.0 included PC1: Provide transfer of care that ensures seamless transitions; PC2: Make informed diagnostic and therapeutic decisions that result in optimal clinical judgment; PC3: Develop and carry out management plan; and PC4: Provide appropriate role modeling. In Milestones 2.0 this was changed to PC1: History; PC2: Physical Exam; PC3: Patient Management; PC4: Diagnostic Testing (including labs, imaging, and functional testing); and PC5: Clinical Consultation. Each of these sub-competencies then had milestone language specific to pediatric endocrine fellowship training, with the supplementary guide illustrating examples in a particular scenario. A fifth PC sub-competency of consultation was added, as it was felt to be a core skill acquired during fellowship training. Many other subspecialty Milestones 2.0 incorporate the consultation sub-competency, including the adult endocrinology Milestones. Table (below) outlines the changes to the MK sub-competencies. Milestones 1.0 was limited to MK1: locate, appraise, and assimilate evidence from scientific studies related to their patients' health problems. The milestone language under this sub-competency was long and multipronged, making assessments challenging. In Milestones 2.0, MK is separated into three sub-competencies: MK1: Physiology and Pathophysiology; MK2: Clinical Reasoning; and MK3: Therapeutics (Behavioral, Medications, Technology, Radiopharmaceuticals). This change is more intuitive and will allow fellows and faculty to identify specific areas of strength and need for improvement.
New sub-competency concepts

Milestones 2.0 addresses several topics that were not emphasized in Milestones 1.0. SBP now includes a specific sub-competency on population and community health that incorporates the concept of health disparities. A separate SBP sub-competency focuses on patient safety, which was previously combined with medical errors and inter-professional teamwork in Milestones 1.0. Additionally, a new PROF sub-competency centers on the concept of well-being. This sub-competency does not evaluate a fellow's personal well-being but instead recognizes and emphasizes the importance of understanding factors that affect fellow and physician well-being.

Creation of the supplemental guide

Finally, an important addition to Milestones 2.0 is the creation of a Supplemental Guide that clarifies the intentions of the working group for each milestone. The guide will be available to program directors, clinical competency committees (CCCs), and fellows to promote a shared mental model, which is one of the primary goals of Milestones 2.0. The Supplemental Guide is available as a Word document as well as a PDF so that individual programs can edit the guide to make it more meaningful to their program. The Supplemental Guide includes five sections for each sub-competency: 1) the overall intent of the sub-competency; 2) a general example for each level; 3) suggested assessment tools to be used by programs in determining level; 4) curriculum mapping (left blank, as it is to be completed by the individual program); and 5) notes or resources. The examples included for each level are not comprehensive, nor are they indicative of a specific requirement. Instead, the examples are a starting point for conversation in creating the shared mental model.

What is the same

Despite many changes with Milestones 2.0, some key concepts remain the same. Fellows should be assigned a milestone level that fits their current performance, regardless of their year in fellowship. A fellow should have met the criteria of their assigned level and those of the preceding level(s). The ACGME has no level requirement that a fellow must achieve to graduate. Instead, graduation readiness is determined by the fellow's program director, scholarly oversight committee, and CCC. Similarly, Milestones 2.0 is not part of the endocrinology certification eligibility requirements established by the American Board of Pediatrics (ABP), and milestone levels are not reported to the ABP. Finally, the milestone set is intended to monitor fellow progression over extended periods of time. Therefore, it has limited utility in short rotations of 2–8 weeks.
Ways to implement milestones into practice

The new ACGME common program requirements state that milestones are to be incorporated into the semiannual evaluation process. Following determination by the CCC, fellows should receive feedback on milestone levels, as these may be useful to identify areas of strength and weakness and to establish learning plans. Milestones can also be utilized for fellow self-assessment or to monitor areas for improvement in a program. Programs may choose to have fellows complete a self-assessment of milestone levels each time the CCC is going to meet. The program director and fellow can then compare both sets of assessments, which may be helpful for both the program and the fellow. The program will have insight into the fellow's understanding of their knowledge, skills, and attitudes, and the fellow will be able to calibrate their own awareness.
Similarly, CCCs can review the milestones of all of their fellows to determine if there are common areas in which their trainees are not progressing as expected, which could represent areas in which their fellowship should focus on improving education. The Milestones were developed to be an important tool in the career progression of trainees, but implementation has been hindered by being overly complex and burdensome. The new pediatric endocrine Milestones 2.0 and the Supplemental Guide are intended to make the milestones more applicable to our field, easier to utilize, focused on individual growth, and more attentive to important issues of health equity and population health. Further research and feedback on Milestones 2.0 after implementation will determine whether these goals were met. While the Milestones are required only in fellowships accredited by the ACGME, their general principles are applicable to trainees worldwide and can be another tool in the evaluation of a fellow’s progress through their career. |
Pharmacogenetics as part of recommended precision medicine for tuberculosis treatment in African populations: Could it be a reality? | 51df6c36-1f59-4544-b7ff-b3954d2602c3 | 10339705 | Pharmacology[mh] | The field of pharmacogenomics (PGx) could be of relevance to standard first‐line tuberculosis (TB) treatment, particularly in resource‐limited African settings, where the high incidence of TB poses a significant burden on healthcare systems and the economy. The narrow therapeutic index of TB drugs leads to interindividual variation in treatment outcomes, including severe adverse drug reactions (ADRs), treatment failure, and drug resistance. TB is one of the 10 leading causes of death worldwide, ranking as the most lethal disease by a single bacterial agent. The World Health Organization (WHO) has ambitiously set its global goal to reduce the number of TB cases by 80% in 2030. However, the coronavirus 19 disease (COVID‐19) pandemic has in the past few years overburdened the healthcare system and has substantially impacted the economy. This has resulted in an increase in the number of patients not seeking treatment thereby contributing to an increase in TB infections, transmissions, and TB‐related deaths in Africa (WHO in 2021). Of great concern to public health is the emerging multidrug‐resistant (MDR) TB, and extensively drug‐resistant (XDR) TB. , , The WHO Global TB Report for 2019, prior to the COVID‐19 pandemic, estimated the number of patients with MDR‐TB at more than 200,000, globally. Although there are limited data available for the frequency of MDR and XDR TB infections in Africa, the few reports suggest that South Africa and Nigeria suffer from the highest incidence on the continent. Mortality rates in these countries are estimated at 21% and 43% for MDR and XDR patients respectively. Although new drugs have become available in recent years to treat patients with MDR and XDR TB, there are only a few options. Standard first‐line treatments have for the past decades relied on the potent drug isoniazid (INH), and rifampicin (RIF), in combination with pyrazinamide (PZA) and ethambutol (ETB), and with a limited number of tolerable drugs available, these drugs need to be used to their maximum utility. The context in Africa is unique and diverse, differentiated by thousands of languages and cultures, unique geographic environments, and expansive genetic diversity. In addition, various economic factors, such as insufficient access to healthcare, potentially affect the occurrence of drug treatment failure, and the high disease burden in Africa. Sub‐Saharan Africa accounts for around 28% of TB cases, 71% of HIV cases, and 88% of the global malaria cases (WHO). Because non‐communicable diseases, such as cancer, diabetes, and cardiovascular disease, are equally increasing in sub‐Saharan Africa, this gives rise to population groups with specific treatment requirements. Prolonged and life‐long polypharmacy related to HIV/TB treatments contribute to the high frequency of ADRs in the African continent with up to 6–8% of hospital admissions attributed to ADRs. Furthermore, specific population subgroups, such as young children, the elderly, pregnant mothers, diabetics, and patients who are HIV‐positive, present with different drug exposure profiles and require adjusted dosages and treatment plans. , , These patient groups should be prioritized for pre‐emptive PGx testing to improve the efficacy and tolerability of TB treatments. 
It is recognized that PGx studies underlie a “eurocentric bias,” and that extrapolating findings with regard to drug safety and efficacy from genomic research conducted in mostly Eurasian populations to African populations may not be feasible. As PGx is gaining momentum, this knowledge gap could aggravate already existing social and ethical inequalities. , On the other hand, the study of African genomes, which are extraordinarily diverse, promises insights into disease susceptibility and drug disposition that could benefit other populations worldwide. , Variation in important PGx genes is vastly different across African populations, with unique and often rare alleles occurring in specific population clusters and at different frequencies particularly within high impact coding regions. More research is required to identify and characterize PGx variants in African populations to potentially develop a pre‐emptive but relevant TB PGx test. The benefits of implementing PGx into clinical care are tangible in the developed world, yet PGx research in Africa and the implementation thereof is greatly lacking and requires immediate attention. , PGX FOR TB DRUGS – NAT2 AS A CASE STUDY Imperative for the development of a PGx test is evidence of its clinical utility, which is “the likelihood that using the test result will lead to improved care and health outcome.” The PharmGKB database, in collaboration with the Clinical Pharmacogenetics Implementation Consortium (CPIC), is the currently most recognized and comprehensive PGx resource, its contributors undertaking the complex task of processing the wealth of growing PGx research results into standardized “levels of evidence” as a measure for clinical utility. Important African initiatives include the African Pharmacogenetics Consortium, the Plasmodium Diversity Network in Africa, and Human Heredity and Health in Africa (H3A). These collaborations are aiming to characterize and integrate human genetic variation across the African continent into public databases, forming the foundation for establishing robust PGx guidelines and paving the way for personalized medicine technologies in Africa. No actionable PGx information is currently available for RIF, PZA, and ETB on the PharmGKB website. INH is assigned a “level C” by the curators, implying that there is not adequate evidence or actionability to have prescribing recommendations (“high level of evidence supporting the association but no variant‐specific prescribing guidance in an annotated clinical guideline or FDA drug label”). The US Food and Drug Administration (FDA)‐approved PGx label acknowledges the effect of genetic variants on drug tolerability. It reads that “the rate of acetylation does not significantly alter the effectiveness of isoniazid. However, slow acetylation may lead to higher blood levels of the drug, and thus, an increase in toxic reactions.” Acetylation of INH is carried out by the enzyme arylamine N ‐acetyltransferase ( NAT2 ), which is responsible for 88% of INH metabolism. The large effect size of this enzyme, which is of importance in several steps of INH metabolism, suggests that NAT2 could be a feasible candidate for an INH‐based PGx test. 
In most populations studied to date, the most common variants are highly predictive of composite metabolizer phenotype, , defined as follows: slow acetylators carrying two nonfunctional alleles (including NAT2*5 , NAT2*6 , NAT2*7 , NAT2*14 , NAT2*17 , and NAT2*19 ), intermediate acetylators harboring only one nonfunctional allele and a functional or the wild‐type allele ( NAT2*4 ), and the fast acetylator phenotype, including patients that carry two functional or wild‐type alleles. The efficacy and efficiency of TB treatment response is drug concentration dependent, with increased INH exposure (area under the curve from zero to 24 h [AUC 0–24 h ]) in slow acetylators being associated with ADRs, and subtherapeutic maximum drug exposure ( C max ) in fast acetylators leading to treatment failure. , More importantly, subtherapeutic drug exposure plays a role in the development of drug resistance and, according to some studies, Africans exhibit subtherapeutic drug levels of TB drugs on standard dosages. , , , Therefore, prediction of INH drug exposure by NAT2 genotype and regulation of dosages for individual patients could prove a crucial advantage in efforts to optimizing treatments. Many other drugs which are metabolized by NAT2 have been identified as “actionable” by the CPIC, including amifampridine, amifampridine phosphate, sulfamethoxazole, and sulfasalazine, motivating that NAT2 could be an overall eligible candidate gene for PGx testing. According to much research, NAT2 genotype‐adjusted dosing of INH is recommended , , , , , , but clinical studies are still few. Recent studies exploring the use of a point‐of‐care (POC) INH PGx test have shown the feasibility of using NAT2 acetylator phenotypes to predict INH clearance rates and bioavailability. , Nonetheless, the detection of a measurable effect of genotype‐adjusted dosages on treatment outcome in real‐world settings will require large sample numbers and replication in various populations. Environmental and patient‐specific factors, such as weight, sex, age, concomitant treatment with other drugs, and diet have all shown to influence INH exposure and will need to be considered. , , In addition, it is likely that the inclusion of other important TB pharmacogenes, as discussed in the following sections, will improve the utility of a TB PGx test. NAT2 GENE AND IMPLICATIONS FOR PGX TESTING IN AFRICA Variation in NAT2 varies substantially between populations, with the six most common alleles NAT2*5 (341T>C, rs1801280), NAT2*6 (590G>A, rs1799930), NAT2*7 (857G>A, rs799931), NAT2*12 (803A>G, rs1208), NAT2*13 (282C>T, rs1041983), NAT2*14 (191G>A, rs1801279) exhibiting very different frequencies across populations in Africa (Figure ). , , It is assumed that the frequency of other causal variation in NAT2 is very low in populations studied to date, but studies in African populations demonstrate a high diversity at this locus, with very different and shorter haplotype structures and novel variation occurring at frequencies which are non‐negligible. Interestingly, hunter‐gatherer tribes, such as the Western Pygmy and Kung San, are predominantly fast and intermediate acetylators, which is in stark contrast to the agriculturalist populations who more commonly present with a slow acetylator phenotype. , Fast acetylator types are even completely absent in the Hadza (Tanzania), Mada, and Fulani (Cameroon). 
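The composite metabolizer phenotype described earlier in this section follows a simple counting rule — two, one, or zero nonfunctional alleles — which can be made explicit in code. The sketch below is illustrative only: the nonfunctional allele set follows the text, treating NAT2*4 together with NAT2*12 and NAT2*13 as functional is an assumption made for the example, and alleles of unknown function (such as NAT2*22) are deliberately left uncalled.

```python
# Illustrative sketch: assign a composite NAT2 acetylator phenotype from a
# diplotype (two star alleles) by counting nonfunctional alleles, following the
# rule described in the text. Real assignments should use curated allele tables
# (e.g., PharmGKB/CPIC); the "functional" set below is an assumption for the example.

NONFUNCTIONAL = {"*5", "*6", "*7", "*14", "*17", "*19"}
FUNCTIONAL = {"*4", "*12", "*13"}  # *4 is the reference allele; *12/*13 assumed functional here

def nat2_phenotype(allele1: str, allele2: str) -> str:
    """Return 'slow', 'intermediate', 'fast', or 'indeterminate' for a NAT2 diplotype."""
    diplotype = (allele1, allele2)
    unknown = [a for a in diplotype if a not in NONFUNCTIONAL and a not in FUNCTIONAL]
    if unknown:
        # Alleles without an assigned function (e.g., *22, *23) cannot be classified.
        return "indeterminate (" + ", ".join(unknown) + " of unknown function)"
    n_nonfunctional = sum(a in NONFUNCTIONAL for a in diplotype)
    return {2: "slow", 1: "intermediate", 0: "fast"}[n_nonfunctional]

if __name__ == "__main__":
    print(nat2_phenotype("*5", "*6"))   # slow
    print(nat2_phenotype("*4", "*6"))   # intermediate
    print(nat2_phenotype("*4", "*13"))  # fast
    print(nat2_phenotype("*4", "*22"))  # indeterminate (*22 of unknown function)
```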
Of note, the differences in allele frequencies to other major populations, such as the Japanese, German, or Brazilian, are even more striking (Figure ). The NAT2*4 allele occurs very frequently in Japanese patients, whereas the NAT2*5 is almost absent. It should be noted that rare allele frequencies shown in Figure could be more frequent and thus of greater significance in other populations not explored to date. These population‐specific differences to other, well‐described major population groups and within Africa have implications for PGx and warrant further investigation by resequencing to ensure inclusion into the PGx knowledge base. A recently developed POC test has shown that including five single‐nucleotide polymorphisms (SNPs) only, namely NAT2*14 or 191G>A (rs1801279), NAT2*13 or 282C>T (rs1041983), NAT2*5 or 341C>T (rs1801280), NAT2*6 or 590G>A (rs1799930), and NAT2*7 or 857G>A (rs799931), was sufficient to accurately predict slow, intermediate, and fast acetylators, in 8561 patients representing 59 populations with 100% accuracy. Interestingly, including the SNP 191G>A, made a significant difference in the utility of the model. This variant is almost exclusive to African populations. The SNP 191G>A, which confers a slow acetylator phenotype, was not included in a study which found no association between NAT2 genotype and INH pharmacokinetics in African Zulu patients, testifying to the importance of including this SNP particularly in PGx research in African populations. In Africans, the frequencies of this variant range from 7.1 to 11.6%. The nonsynonymous variants NAT2*22 or 609G>T (rs45618543) and NAT2*24 or 403C>G (rs12720065) have also been detected predominantly in Africans (Figure ), but have not been assigned a functional affect, and could lead to an underestimation of slow acetylator phenotypes in these population groups. It is estimated that 50% of the world's population is either a fast or slow acetylator, thus falling into the patient groups which could benefit from either INH dose increase or reduction, respectively. , Notably, ADRs and treatment failure instances correlate with the population‐specific frequency of slow and fast acetylator genotypes. This finding has large implications for meriting implementation of TB PGx in populations worldwide. Figure shows the large differences in predicted acetylator phenotypes in African and other major population groups. For some African populations, such as the Pygmy, the prediction of acetylator phenotype was not possible for genotypes shown in yellow, as the effect of alleles occurring exclusively in these populations is yet unknown (such as NAT2*22 and NAT2*23 ). Evidently, much more research is required to elucidate functional effects of rare alleles in African populations before a PGx effect may be predicted. PHARMACOKINETICS AND TREATMENT OUTCOME OF OTHER FIRST‐LINE DRUGS Together with INH, RIF forms the backbone of active and latent TB treatment. Considerable variation in RIF exposure is observed and explained by multiple genes, but no candidate gene has yet been identified for therapeutic drug monitoring of RIF. A recent meta‐analysis by Sileshi et al. identified polymorphisms rs4149032 (g.38664C>T), rs2306283 (c.388A>G), and rs11045819 (c.463C>A) in the organic anion transporting polypeptide 1B1 (OATP1B1) gene ( SLCO1B1 ) to be of importance. These variants have repeatedly been associated with RIF plasma concentrations and susceptibility to anti‐TB drug‐induced liver injury (ATDILI), including in African populations. 
, Association studies with RIF pharmacokinetics and anion transporting polypeptides P‐glycoprotein ( ABCB1 or MDR1 gene), pregnane X receptor, and constitutive androstane receptor ( CAR ), metabolizing enzymes cholinesterase enzyme 1 ( CES1 ), and arylacetamide deacetylase ( AADAC ), are still inconclusive, with reports often describing associations only in specific populations. , , For example, only one study described an effect of ABCB1 3435C>T on RIF plasma concentration in Mexican patients and only one study found an association of increased RIF plasma concentrations and CES2 c.‐22263A>G (g.738A>G) in Korean patients. The AADAC polymorphism rs1803155 AA variant resulted in lower RIF clearance in two independent studies in South African populations. , Scant PGx information is available for the influence of genetic variation on drug disposition of PZA and ETB. Only one PGx study shows that cytochrome P450 enzyme 1A2 ( CYP1A2 ) SNP 2159G>A is associated with a 50% reduction in ETB bioavailability in a group of HIV‐co‐infected Rwandan patients, and simulated estimates predict that increasing dosages in carriers would lead to clinically more adequate exposure. Recent studies suggest PZA may contribute to hepatoxicity, and patients with impaired liver or renal function might benefit from lower dosages. Xanthine oxidase plays a role in the various pathways by which PZA is metabolized, but the identity of the gene coding for PZA amidase is still unknown. The knowledge gap concerning PZA and ETB is considerable and should be addressed. If a TB PGx test is to be effectively used to avoid ADRs, the effect of PZA on hepatoxicity, apart from the ones of INH and RIF, needs to be accounted for. HIV AND TB TREATMENT Drug–drug‐gene interactions contribute to ADRs and differential treatment outcomes, but their importance in altering drug disposition is often under‐estimated. Without prior PGx knowledge, induction, inhibition, and even pheno‐conversion effects can result in highly unpredictable drug exposures. RIF commonly acts as an inducer on enzymes, whereas INH has an inhibitory action, and both are known to interact with a wide variety of common drugs, such as anticoagulants, immunosuppressants, contraceptives, glucocorticoids, anticonvulsants, and paracetamol, among others. Given the high incidence of TB‐HIV co‐infection in Sub‐Saharan Africa, and the complexities surrounding the treatment particularly of immune‐compromised patients, better understanding of the drug–drug‐gene interactions could be valuable. Compared to patients receiving TB treatment only, TB‐HIV co‐infected patients have significantly lower INH bioavailability. Of note, efavirenz (EFV)‐based antiretroviral therapy is the advocated choice with TB treatment, but EFV plasma levels vary significantly in combination with RIF, which acts as a strong inducer of the hepatic cytochrome P450 enzyme system. Pregnant patients receiving concomitant HIV and TB regimens had 26% and 15% higher clearance rates of INH and EFV, and thus even lower bioavailability of these drugs. The same study by Gausi et al. revealed that both NAT2 and CYP2B6 genotypes were the strongest predictors of drug bioavailability. Interestingly, two African populations had higher EFV levels than controls not receiving RIF, particularly in carriers of the slow metabolizer allele CYP2B6*6 , which was the exact opposite result in similar studies involving White populations. 
RIF is known to induce CYP2B6, the main metabolizing enzyme of EFV, and thus expected to lead to a reduction in EFV levels. Mugusi et al. hypothesize that this effect may be explained by the inhibitory action of INH on CYP2A6, which becomes more important in EFV metabolism in CYP2B6*6 slow metabolizers. This study reflects the complexity involved in drug–drug and gene–drug or gene–gene interactions which would need to be considered, and that PGx effect might differ significantly between populations. A combined TB and HIV PGx test, including CYP2B6 metabolizer phenotypes in patients receiving concomitant EVF‐based HIV and TB treatment, may increase the utility of such a test for comprehensive dose optimization of both regimens. ADRS AND TREATMENT FAILURE THROUGH PGX ‐DIRECTED DOSING ATDILI is markedly the most critical and common ADR, occurring in 2–28% of patients with TB. INH, RIF, and PZA are implicated to contribute to hepatoxicity, but no effect of ETB has been described to date. ATDILI treatment prognosis in South Africa is poor, with patients not tolerating first‐line regimens, frequent hospitalizations, treatment interruptions, and frequently contributing to non‐adherence, resulting in drug resistance and high mortality rates. ATDILI is a complex, polygenic phenotype with environmental, patient‐related factors and a combination of genes linked to affecting risk. The risk for ATDILI is higher in developing countries, due to malnutrition, advanced TB, or incorrect drug usage. The mechanisms underlying ATDILI are not fully understood, but the accumulation of toxic metabolites is thought to play a major role, and thus, poor metabolizer phenotypes are at an increased risk. The NAT2 slow acetylator phenotype is a strong biomarker for INH‐induced ATDILI. , , To a lesser extent, polymorphisms in cytochrome P240 E1 ( CYP2E1 ) , and Glutathione‐S‐transferases ( GSTMI and GSTTI ) , have also been implicated in influencing INH metabolism and thus ADR development. Variation in these genes could play an additive role in ATDILI if clearance of toxic downstream metabolites is impaired. Some studies have reported higher incidence of ATDILI in Indian carriers of the CYP2E1*6 allele and Taiwanese carriers of CYP2E1*1A/*1A genotypes in combination with slow NAT2 acetylator genotypes. An increased risk of INH‐induced ATDILI with UDP‐glucuronosyltransferase UGT1A1*27 and UGT1A1*28 genotypes in Chinese patients has also been reported. More information is required to determine the involvement of other genetic factors in ATDILI, but their effect size will determine if their inclusion in a PGx test is worthwhile. Interestingly, a recent genomewide association study in a Thai population revealed that a tagging SNP (rs1495741) was significantly associated with ATDILI risk. Although it is in an intronic region of the NAT2 gene, and its effect on protein functionality is unknown, it may be indicative of the effect of linkage disequilibrium (LD) with known functional SNPs. This tagging SNP, of which the A allele commonly segregates with slow acetylator haplotypes NAT2*5B , NAT2*6A , and NAT2*7B , and the G allele often occurring with the NAT2*4 haplotype, could thus be useful in a PGx test in Thai populations. The researchers found that assigning acetylator phenotypes according to rs1495741 genotype as slow acetylator (AA), intermediate acetylator (AG), and fast acetylator (GG), conventional acetylator phenotypes were predicted with a concordance rate of 94.98%. 
However, this SNP did not suffice to predict acetylator phenotype in African populations. Tag SNPs may provide for a more cost‐effective alternative, but the significantly shorter LD patterns in Africans impair the utility of tag SNPs to replace genotyping (many) functional SNPs to predict phenotypes. To date, no tag SNP has been identified to accurately predict acetylator phenotype or TB treatment outcome in Africans. Although clinical studies are few, and sample numbers are small, there are randomized, clinical control study designs available that demonstrate the benefit of NAT2 genotype‐directed dosing to improve ADRs in drug‐susceptible patients. , , , A clinical trial by Yoo et al. showed that lowering dosages in slow acetylators resulted in significantly less ADRs in the test group, without risking subtherapeutic drug levels. Furthermore, application of a dosage algorithm taking into account acetylator status and body weight, achieved therapeutic INH concentrations across patient groups. Whereas avoiding ATDILI by reducing dosages in slow acetylators seems feasible, there is insufficient clinical evidence that increasing dosages in fast acetylators will improve treatment success rates. Although one study found that the sputum failure conversion risk was two‐fold higher in a group of Indonesian fast acetylators compared to slow acetylators, the difference was not significant. One important clinical trial conducted in Japanese patients validates that NAT2 ‐genotype directed INH dose adjustments in slow and fast acetylators significantly improves treatment outcome with regard to reduced ATDILI and treatment failure, respectively. Patients received either standard treatments or an adjusted regimen where slow acetylators received 2.5 mg/kg INH and fast acetylators received 7.5 mg/kg. Intermediate acetylators received the standard dosage of 5.5 mg/kg. None of the slow acetylators receiving dose adjustments developed ATDILI, whereas 78% of slow acetylator patients had ATDILI in the control group receiving standard dosages. The patient group receiving adjusted dosages had a significantly lower risk of treatment failure in fast acetylators, and a lower risk of early treatment failure at 8 weeks. Most importantly, increasing the dosage in fast acetylators did not predispose these patients to ADRs, nor did slow acetylators receiving a lower dosage exhibit greater chances of treatment failure, testifying to the safety and tolerability of this optimized regimen. Cure rates of patients with MDR TB in a South African population group are less than 50%. Second‐line drugs, currently used to treat patients with MDR and XDR TB, are less effective, less tolerated, not always available, and fall short of the unparalleled, dose‐dependent early bactericidal activity of INH and strong sterilizing capacity of RIF. At increased dosages, RIF and INH can benefit TB treatment outcomes, particularly in patients with MDR, and could shorten treatment duration. , , , Thus far, dosage increases (up to 10–15 mg/kg) are not regularly implemented because of justified concerns of toxicity and ADRs. However, safely increasing dosage to such levels can be achieved with knowledge of the influential NAT2 genotype of the patient. PGx could thus play a key role in identifying patients with MDR or XDR TB who could still be eligible for first‐line treatments, at safely increased dosages. As such, PGx could facilitate the optimal use of the limited types of drugs and resources that are available in developing countries. 
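To make the genotype-directed regimen described above concrete, the sketch below expresses it as a small dosing function. The slow- and fast-acetylator values (2.5 and 7.5 mg/kg) follow the trial summarized in this section; the per-kg value used for intermediate acetylators, and the absence of any rounding to tablet strengths or dose caps, are simplifying assumptions for illustration and not part of any cited protocol.

```python
# Illustrative sketch of NAT2 genotype-guided INH dosing as described above.
# Slow and fast per-kg doses follow the trial summarized in the text; the value
# used for intermediate acetylators (the "standard" dose) is a placeholder
# assumption, and no rounding to tablet strengths or maximum dose is applied.

STANDARD_MG_PER_KG = 5.0  # assumed standard weight-based dose, for illustration only

DOSE_MG_PER_KG = {
    "slow": 2.5,
    "intermediate": STANDARD_MG_PER_KG,
    "fast": 7.5,
}

def guided_inh_dose_mg(acetylator_phenotype: str, body_weight_kg: float) -> float:
    """Return an illustrative genotype-guided daily INH dose in mg."""
    if acetylator_phenotype not in DOSE_MG_PER_KG:
        raise ValueError(f"Unknown acetylator phenotype: {acetylator_phenotype!r}")
    return DOSE_MG_PER_KG[acetylator_phenotype] * body_weight_kg

if __name__ == "__main__":
    for phenotype in ("slow", "intermediate", "fast"):
        print(phenotype, guided_inh_dose_mg(phenotype, 60.0), "mg/day")
```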
TB, AND PGX IN AFRICAN POPULATIONS

PGx has the potential to exacerbate, rather than reduce, already existing socio-economic health disparities. Unfortunately, the underprivileged populations in Africa are affected most by TB and under-represented in PGx research. Currently, there are 168 PGx testing panels available, in which African genomic data account for only about 2% of the data represented. Extrapolating PGx findings from White populations to African populations, as exemplified by the highly variable NAT2 gene, can produce poor results. Even across Southern and Western African population groups, selective pressure and structural genetic differences in the NAT2 locus are evident, such that these population groups may have different drug response profiles. For a PGx test to be widely valid and thus more economical in its large-scale application down the line, it is important that PGx testing panels include variation representing all the populations that could benefit. More importantly, increased admixture in global populations warrants an approach where the inherent genetic individuality of patients is recognized, instead of using ethnicity as a proxy. The frequency of a PGx marker is an important consideration for the design of a cost-effective PGx test. Most recommendations for PGx testing foresee the inclusion of only common variation, but the inclusion of rare genetic variation, which is a signature characteristic of African genomes, significantly improves the predictability of phenotypes. In fact, African genomes have three times more rare variants than European and Asian genomes. Thus, rare ADRs and interethnic differences in drug response may be the result of rare variation. Genes involved in absorption, distribution, metabolism, and excretion are highly variable, with important variants found in European populations also often being rare in African patients, reflecting a European research bias and rendering array-based genotyping technologies poorly suited to patients of African ancestry. It is suggested that with genotyping arrays better suited to African populations, frequencies of variants which are “rare” might be identified more often. Moreover, 30–40% of functional variability in PGx genes is accounted for by rare variants without known functional effect. Fortunately, bioinformatic tools are increasingly becoming available, facilitating the assessment of functional effects of rare variants in a cost-effective manner. PGx studies in populations of African ancestry should rely on increasingly better and affordable sequencing techniques to discover novel and rare variants, and on clustering together certain subpopulations to render precision medicine more economically feasible where common clinically important variants are sparse. Cost-effectiveness, accessibility, and overall economic benefit to patients and healthcare systems will determine the feasibility of implementing a TB PGx test. Pereira et al. recommend that governments should refrain from implementing a service which does not benefit at least 10% of their population, and that stratifying patients through affordable diagnostic assays is necessary to provide targeted treatments. Patients receiving concomitant HIV and TB treatment, the elderly, very young children, pregnant women, and patients with severe disease could benefit most from optimized treatments.
Furthermore, a TB PGx test should be performed prior to treatment commencement, saving money on ineffective treatment outcomes, and time, as the PGx information is immediately available. However, a pre-emptive test will rely on the availability of an electronic health record system being in place, which will require a large investment in healthcare in African countries. With adequate funding, much can be achieved, particularly in the field of genomic medicine. It is of utmost importance that governments allocate funding not only toward TB treatment, but also toward innovative research and development to facilitate similarly rapid scientific achievements as were seen during the COVID-19 pandemic. Globally, COVID-19 is a crucial health concern; however, in sub-Saharan Africa, TB remains a far greater disease burden than COVID-19, and, if overlooked, could lead to devastating long-term setbacks for health care in Africa, necessitating resource allocation for TB rather than COVID-19. “Realizing the promise of pharmacogenomics to benefit society may have as much or more to do with successful breakthroughs in the more mundane arena of demonstrating its value to those who will fund it than innovative scientific discovery.”

TB PGX TEST

The gold standard for estimating the efficiency, utility, and cost-effectiveness of implementing PGx into clinical care is the randomized, controlled clinical trial. However, intelligent analytical modeling may provide a cheap and comprehensive solution to this research question. One recent study has explored the economic feasibility of implementing a POC NAT2 PGx test in three different country settings by making use of an analytical modeling system. The automated prototype POC test developed by Verma et al. on the GeneXpert platform requires as little as 25 μL of blood and has genotyping results available within 2.5 h. The same research group has modeled the clinical impact and cost-effectiveness of this test in Brazil, India, and South Africa, taking into consideration country-specific costs, estimated risks of toxicity, 2-month culture positivity, and treatment failures. Health outcomes are quantified in quality-adjusted life years (QALYs), and cost-effectiveness as incremental cost-effectiveness ratios (ICERs). Implementation of an NAT2 PGx test in South Africa is predicted to cost an additional US $3182 and to gain 19 QALYs per 1000 patients with TB. This result is cost-effective, the ICER of US $1780 per QALY gained being less than the South African per capita GDP (US $6340). This research provides convincing evidence that despite TB PGx testing not being regarded as actionable or in wide clinical use, its implementation into African settings could not only be feasible but also benefit individual patients as well as optimize resources in over-burdened healthcare systems. Thus, the available arsenal in the ongoing fight against TB could be transformed. Pereira et al. identified three important requirements for successful implementation of PGx in Africa: creating an electronic health system, building molecular biology skills, and standardizing and sharing data across Africa. Finally, these requirements need to be linked so that data become available to clinicians in a comprehensive format. Furthermore, the design of PGx testing panels including variation specific for African populations is crucial.
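The cost-effectiveness reasoning used in the South African modeling example above boils down to a single ratio — incremental cost divided by incremental QALYs — compared against a willingness-to-pay threshold such as the per capita GDP. The sketch below only illustrates that arithmetic; the cohort inputs are hypothetical placeholders and are not taken from the cited study.

```python
# Illustrative ICER calculation for a pharmacogenetic testing strategy versus
# standard care. The cohort numbers below are hypothetical placeholders, not
# results from the cost-effectiveness study discussed in the text.

def icer(delta_cost_usd: float, delta_qalys: float) -> float:
    """Incremental cost-effectiveness ratio (USD per QALY gained)."""
    if delta_qalys <= 0:
        raise ValueError("ICER is only meaningful when the new strategy gains QALYs.")
    return delta_cost_usd / delta_qalys

def is_cost_effective(icer_value: float, willingness_to_pay_usd_per_qaly: float) -> bool:
    """Compare an ICER against a willingness-to-pay threshold (e.g., per capita GDP)."""
    return icer_value <= willingness_to_pay_usd_per_qaly

if __name__ == "__main__":
    # Hypothetical cohort of 1000 patients: extra cost of testing and QALYs gained.
    example_icer = icer(delta_cost_usd=30_000.0, delta_qalys=17.0)
    print(f"ICER: {example_icer:.0f} USD per QALY gained")
    print("Cost-effective at a GDP-based threshold of 6340 USD/QALY:",
          is_cost_effective(example_icer, 6340.0))
```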
There is much evidence available for the relationship between NAT2 genotype and INH-based TB treatment outcomes, but implementation into clinical practice will require robust and comprehensible guidelines. For a TB PGx test, the most common NAT2 variants may suffice to predict INH exposure in most populations. Although implementation of a POC pre-emptive NAT2-based PGx test is predicted to be cost-saving and efficient in improving health outcomes, randomized controlled clinical studies examining the benefit of genotype-directed dosing are still needed. Sufficient PGx information is not yet available for RIF, PZA, and ETB. Given the high incidence of HIV-TB co-infected patients, combining PGx information relevant for both regimens in one PGx test may increase its value. The development of a dosing algorithm taking all these factors into consideration will likely form the foundation of the solution, but considerable funding and more research are required to elucidate and implement the genetic, patient-specific, and environmental factors surrounding TB drug disposition and its clinical effects into a TB PGx test in Africa. Research reported in this publication was supported by the Grants Innovation and Product Development unit of the South African Medical Research Council with funds received from Novartis and GSK R&D for Project Africa GRADIENT (Grant # GSKNVS1/202101/001 and Grant # GSKNVS2/202101/003). This research was partially funded by the South African government through the South African Medical Research Council and the National Research Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the South African Medical Research Council or the National Research Foundation. The authors declared no competing interests for this work. |
Phosphoproteomics for studying signaling pathways evoked by hormones of the renin‐angiotensin system: A source of untapped potential | e7abf898-eccf-48e3-a3c3-619108aafd0b | 11737475 | Biochemistry[mh] | INTRODUCTION The Renin‐Angiotensin System (RAS) is a complex neuroendocrine system composed of the protein angiotensinogen (AGT), peptide hormones derived from AGT after limited proteolysis, and several receptors (Figure ). RAS components are found in the vast majority of tissues, controlling a large variety of processes including arterial blood pressure and extracellular fluid volume, learning/memory, metabolism, inflammation, fibrosis, reproduction, cell proliferation etc. Disturbances in the RAS are involved in several diseases such as hypertension and related organ damage, kidney disease, cancer, fibrotic disease, ischemic brain damage, among others. , , Understanding the function of the RAS is, therefore, paramount for preventing and treating RAS‐associated disorders. Knowledge of signaling mechanisms elicited by RAS effectors is essential for a deeper understanding of the molecular mechanisms underlying RAS functions. RAS‐related signaling mechanisms have been investigated by classical methods (e.g., Western blotting) for many decades and created a solid foundation of knowledge. However, antibody‐based methods have limitations such as availability of commercial antibodies with high specificity and sufficient sensitivity. Another limitation is the slow throughput due to the “one protein at a time” approach. Therefore, the investigation of changes in abundance or phosphorylation of proteins within signaling cascades by antibody‐based methods is limited to a quite restricted number of target proteins. Only recently, mass spectrometry (MS)‐based “antibody‐free” approaches have been added to the armamentarium for studying the RAS. Generally, MS‐based techniques have the advantage of very high sensitivity and of the possibility to determine changes in abundance of thousands of proteins at the same time. Importantly, MS‐based techniques are also suitable for measuring agonist‐induced post‐translational modifications (PTMs) such as changes in protein phosphorylation, methylation or glycosylation within the entire cell/tissue proteome. Since PTMs, in particular phosphorylations, are often responsible for changing the activation status of a protein, particularly enzymes, information on PTMs and the respective bioinformatical analysis of such data allows inferences on the activation/inhibition of signaling cascades or other relevant biological processes. This is an important advantage over studies on protein abundance or mRNA expression only, since data on expression do not allow conclusions on protein activity. This article reviews existing studies which applied MS‐based techniques for studying RAS signaling. It focuses on studies applying phosphoproteomics as this technique allows monitoring protein phosphorylation/dephosphorylation events associated with signal transduction. In addition, our article provides an overview over signaling pathways that are shared by different receptors of the protective arm of the RAS as identified by phosphoproteomics. Finally, we discuss knowledge gaps which could be addressed in the future using MS‐based approaches. 
RAS LIGANDS, ENZYMES, AND RECEPTORS The discovery of the RAS began in 1898, when Tigerstedt and his assistant Bergman working at the Karolinska Institute in Sweden reported that a protein (renin) extracted from rabbit kidney induced pressor effects when injected into another rabbit. Forty years later, two independent research groups identified the octapeptide angiotensin (Ang) II (H‐DRVYIHPF‐OH) to be the active hormone responsible for this pressor effect (refer to for an Ang II historical review). Ang II is produced from AGT in a two‐step enzymatic process involving renin and angiotensin‐converting enzyme (ACE) (Figure ). In the 1970s, Ang III (H‐RVYIHPF‐OH) was identified as a product of the enzymatic removal of the aspartate residue from the N‐terminal of Ang II. , During the 1970–80s, studies involving Ang II analogues such as Sar 1 ‐Ala 8 ‐Ang II, Sar 1 ‐Cys(Me) 8 ‐Ang II or Ang III revealed considerable variability in the responses elicited by these agonists across different tissues indicating the involvement of two or more receptors in mediating the responses of RAS effectors. , This assumption was finally proven in 1989, when ligands specific for certain receptor subtypes became available such as the non‐peptide compounds DuP 753/Ex89 (losartan, AT 1 antagonist), PD123319 (AT 2 antagonist), and the Ang II‐peptide analogue CGP42112A (AT 2 agonist). Using these new tools, two independent research groups observed differential displacement of Ang II by these compounds in various tissue preparations, which led to the identification of two distinct receptor subtypes termed the AT 1 receptor (AT 1 R) and the AT 2 receptor (AT 2 R). , Existence of these two receptor subtypes was finally proven in the early 1990s with the cloning of the respective cDNA sequences. , The first reports on Ang IV (H‐VYIHPF‐OH) were published in the 1960–70s and were based on structure‐to‐function studies using Ang II N‐terminal fragments. At that time, however, Ang IV was deemed to be biologically inactive. Only from the 1980s, biological effects associated with Ang IV were unveiled, mainly showing modulation of animal behavior such as improvement of learning and memory recall. Ang IV exerts some of its effects by low‐affinity binding to the AT 1 R and the AT 2 R. However, the main endogenous target for Ang IV is the insulin‐regulated aminopeptidase (IRAP), also referred to as AT 4 R, as only identified in 2001. IRAP has enzymatic activity which is inhibited by Ang IV upon binding. Ang IV can be formed directly from Ang II by dipeptidyl aminopeptidases (DAP) or as an end‐product of Ang II N‐terminal processing by aminopeptidases (AP) with Ang III as an intermediate of this process (Figure ). Studies from the late 1980s reported biological effects of Ang‐(1–7) (H‐DRVYIHP‐OH), , a peptide previously considered as an inactive product of Ang II degradation. However, only in 2003 the receptor Mas (MasR) was identified as the receptor for Ang‐(1–7). In the second half of the 2000s and the first half of the 2010s, two more RAS peptides were discovered: Ang A (H‐ARVYIHPF‐OH) acting via the AT 1 R to elicit similar effects as Ang II / AT 1 R, and Alamandine (H‐ARVYIHP‐OH) and its receptor MrgD. Alamandine and Ang‐(1–7) are both 7‐mer peptides differing only at position 1; Ala 1 in Alamandine versus Asp 1 in Ang‐(1–7) (Figure ). 
It is believed that an enzyme with decarboxylase activity is responsible for producing Alamandine by removing a CO 2 group from the side chain of Asp 1 to produce Ala 1 , thus transforming Ang-(1–7) into Alamandine (Figure ). However, to date, such an enzyme is yet to be identified. Figure represents an up-to-date view of the RAS including its two functional arms: the classical (canonical) axis and the protective (non-canonical) axis. The main receptor of the classical axis is the AT 1 R, whereas the main receptors of the protective axis include the AT 2 R, MasR, and MrgD (Figure ). Most recently, Ang-(1–5), a degradation product of Ang-(1–7), was shown to be another biologically active hormone of the RAS. Thorough characterization of the peptide revealed that it is an endogenous AT 2 R agonist, which elicits effects typical of AT 2 R activation such as nitric oxide (NO) synthesis via protein kinase B (Akt)/endothelial nitric oxide synthase (eNOS) signaling, relaxation of mouse and human resistance arteries, and lowering of blood pressure in male and female normotensive mice. Another recent addition to the RAS peptide family was the endogenous peptide Alamandine-(1–5) [Ala-(1–5)] (H-ARVYI-OH). Ala-(1–5) seems to signal through the protective RAS receptors: MasR, MrgD and AT 2 R. However, only some effects of Ala-(1–5) are typical of MasR-, MrgD- or AT 2 R-mediated actions (e.g., increased NO production and reduction of blood pressure in normotensive (Wistar) and hypertensive (SHR) rats), whereas others are not (e.g., constriction of mouse aortic rings and reduced contractility of cardiac myocytes). The unconventional effects elicited by Ala-(1–5) suggest that it potentially binds to different receptor sites and/or elicits G-protein-independent signaling pathways. Effects evoked by the two RAS arms are usually counter-regulatory. For example, while the activation of the classical axis leads to vasoconstriction, inflammation, fibrosis, and proliferation, activation of the protective axis leads to vasodilation, anti-inflammatory, anti-fibrotic, and antiproliferative effects (Figure ).

PHOSPHOPROTEOMICS FOR THE STUDY OF CELL SIGNALING WITHIN THE RAS AND BEYOND

Proteomics encompasses the investigation of a specific proteome, which is defined as the set of proteins being synthesized or degraded within a particular cell or tissue within a specific time. The development of proteomics as we know it today took place in the 1990s, but its rapid advancement accelerated from the 2000s onwards. This progress was primarily propelled by the introduction of novel sample preparation techniques, by more sophisticated mass spectrometers, and by the development of new bioinformatic tools. From the early days of proteomics, it was evident that novel strategies were required to extend the application of this technique to the study of protein phosphorylation, a PTM that typically occurs at low abundance and therefore cannot be identified through conventional proteomics approaches. To overcome this issue, phosphopeptide enrichment techniques were developed, enabling the identification, localization, and quantification of phosphorylation sites. Therefore, in contrast to proteomics, which serves to quantify protein abundances (proteome quantification), phosphoproteomics quantifies protein phosphorylation levels (phosphoproteome quantification), thus allowing conclusions on the activation level of certain proteins such as kinases/phosphatases within signaling cascades.
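In practice, the two layers of data are interpreted together: phosphosite changes are read relative to the abundance of the parent protein measured in the same samples, a point expanded in the next paragraph. The sketch below shows the simplest form of this correction on log2 fold changes; the site names, values, and cutoff are illustrative assumptions, not data from any study discussed here.

```python
# Illustrative sketch: correct phosphosite fold changes for changes in the
# abundance of the parent protein, so that a "regulated" site reflects altered
# phosphorylation rather than altered protein expression. Inputs are log2 fold
# changes (treated vs. control); the example values and cutoff are assumptions.

def corrected_phospho_log2fc(phospho_log2fc: float, protein_log2fc: float) -> float:
    """Subtract the protein-level change from the phosphosite-level change."""
    return phospho_log2fc - protein_log2fc

def is_regulated(corrected_log2fc: float, cutoff: float = 1.0) -> bool:
    """Flag sites whose abundance-corrected change exceeds a chosen cutoff."""
    return abs(corrected_log2fc) >= cutoff

if __name__ == "__main__":
    sites = {
        # site id: (phospho log2FC, protein log2FC) -- hypothetical values
        "ProteinA_S15": (1.8, 0.1),
        "ProteinB_T89": (1.2, 1.1),  # apparent change explained by protein abundance
    }
    for site, (p_fc, prot_fc) in sites.items():
        corrected = corrected_phospho_log2fc(p_fc, prot_fc)
        print(f"{site}: corrected log2FC = {corrected:+.2f}, regulated = {is_regulated(corrected)}")
```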
When applying phosphoproteomics, it is important to run proteomics as well on the same samples so that phosphorylation levels can be normalized to protein abundances. Proteomics and phosphoproteomics can be applied in two distinct manners: the targeted and the untargeted approaches. Untargeted proteomics and phosphoproteomics are hypothesis-generating approaches and do not require the pre-definition of certain proteins of interest. Instead, they map the global proteome or phosphoproteome of a cell or tissue for changes in protein expression or phosphorylation in response to a certain intervention, thereby potentially identifying so far unknown biological processes. In contrast, targeted proteomics/phosphoproteomics is a hypothesis-driven approach that quantifies pre-defined proteins and phosphoproteins (targets) to be assessed in a similar way to antibody-based methods (e.g., Western blotting), but without the need for antibodies and without the restrictions regarding the number of investigated proteins per experiment. While the untargeted approach is typically favored during the discovery phase of a research project, the targeted approach can be employed to validate findings obtained during the discovery phase. Figure illustrates a typical workflow for investigating a specific proteome and phosphoproteome within the context of cellular signaling. Since changes in the phosphorylation status of certain proteins at specific residues are a most common feature of cell signaling cascades, detection of such events (phosphorylation/dephosphorylation) by MS-based phosphoproteomics represents a potent tool for unbiased exploration of cell signaling pathways. Nevertheless, so far only a few studies have investigated signaling mechanisms within the RAS by phosphoproteomics, meaning that the power of this technique has not yet been fully taken advantage of in RAS research. Since 2010, the year of the first two studies on RAS signaling using phosphoproteomics, only 22 articles have been published which in some way or another had to do with signaling and RAS components. This contrasts sharply with the more than 17000 studies on RAS signaling since 2010 using other techniques or the more than 3500 publications that have employed phosphoproteomics to study cellular signaling networks unrelated to RAS in the same timeframe (PubMed searches made in November 2024 using the following search terms: “angiotensin AND signalling”; “signalling AND phosphoproteome”; “angiotensin AND signalling AND phosphoproteome”). Therefore, in the following sections we will highlight the power of phosphoproteomics for the investigation of RAS-related intracellular signaling, aiming to spark interest in phosphoproteomics by providing a critical assessment of the utilization of this technology and by reviewing those studies which have applied phosphoproteomics in RAS research so far. Table summarizes the key publications discussed in this review, which helped define what is now known about RAS signaling.

THE AT 1 RECEPTOR

The AT 1 R is a classical class A G-protein-coupled receptor (GPCR) which signals through G q and G 11/12 pathways and through β-arrestin.
AT 1 R signaling mechanisms have been well characterized by conventional methods and include activation of phospholipase C, IP 3 -triggered calcium release, protein kinase C-mediated cell proliferation and smooth muscle contraction, as well as activation of the Rho kinase, MAPK/ERK (mitogen-activated protein kinases/extracellular signal-regulated kinases), JAK/STAT (tyrosine-protein kinases JAK/signal transducer and activator of transcription), NF-κB (nuclear factor kappa-light-chain-enhancer in B cells), TGF-β (transforming growth factor-beta), Src family (proto-oncogene tyrosine-protein kinase Src), PI3K (phosphatidylinositol 4-phosphate 3-kinases)/Akt, and CaMK (calcium/calmodulin-dependent protein kinases) pathways. According to our literature search, five studies have been published applying phosphoproteomics for studying AT 1 R signaling, four of which investigated signaling mechanisms of biased agonists. Generally, depending on the agonist applied, stimulation of GPCRs can result in activation of either the entire signalosome or only a subset of signaling mechanisms. This phenomenon is known as biased agonism and was initially observed for the PACAP type I (PAC1) receptor and the muscarinic M1 receptor, and subsequently also for several other GPCRs including the AT 1 R. In the case of the AT 1 R, biased ligands selectively activate (with different efficacy profiles) either G-protein-dependent pathways or β-arrestin signaling. Before reviewing the phosphoproteomics studies which investigated AT 1 R biased signaling, we would first like to review the two studies which looked at AT 1 R signaling in a general way. One of these studies examined AT 1 R signaling in AT 1 R-transfected immortalized podocytes (AB8 3F-AT 1 R). Treatment with Ang II (100 nM, 15 mins) led to changes in the phosphorylation status of 6323 protein fragments that could be assigned to 2081 distinct proteins. As expected for a classical class A GPCR, phosphorylation events were more frequent than dephosphorylation events. Within the phosphorylated sites, the authors observed that the MAPK motif (proline at position +1) was enriched. This is consistent with substantial evidence in the literature that the MAPK pathway is involved in AT 1 R signaling. Other proteins found to undergo large changes in their phosphorylation status were tenascin, integrin-β6, neuroblast differentiation-associated protein, LCP1 (L-plastin), optineurin, plasminogen activator inhibitor 1, serine/threonine protein kinase D2, protein bicaudal C homolog 1, palladin, and ephrin type-A receptor. Gene ontology analysis of Ang II-treated AB8 3F-AT 1 R phosphoproteomics data revealed an enrichment of terms related to actin cytoskeleton and lamellipodia, among them the protein LCP1 (phosphorylated at Ser 5 ), which is a member of the α-actinin family and important for actin assembly. Ang II-induced phosphorylation of LCP1 at Ser 5 was validated by Western blot analysis and shown to be indeed AT 1 R-mediated, since it was inhibited by the AT 1 R antagonist losartan.
In further experiments using specific kinase inhibitors, the authors could show that Ang II-induced phosphorylation of LCP1 was dependent on activation of ERK, RSK (ribosomal S6 kinase), PKC (protein kinase C) and PKA (cAMP-dependent protein kinase). Finally, functional experiments demonstrated Ang II-induced trafficking of LCP1 together with actin to the cell margins as well as Ang II-induced formation of filopodia and cell–cell contacts that was dependent on Ser 5 -LCP1 phosphorylation. The authors compared the outcome of their phosphoproteomic study with a study from Jakob L Hansen's group, which investigated AT 1 R signaling by phosphoproteomics applying a largely identical protocol (100 nM Ang II for 3 and 15 mins) but in a different renal cell line, AT 1 R-transfected human embryonic kidney (HEK)-293 cells. The comparison revealed that 121 proteins which had increased phosphorylation levels in response to Ang II were identical in both studies, whereas there were 323 phosphoproteins only detected in podocytes and 406 phosphoproteins only detected in AT 1 R-HEK-293 cells upon AT 1 R activation. Some of the HEK-293-specific phosphoproteins may be attributable to the 3-min stimulation, since apparently proteins from both stimulations (3 and 15 mins) were analyzed together, whereas in podocytes, only the 15-min time-point was investigated. Nevertheless, the important lesson from this comparison is that it is not possible to get a general picture of the AT 1 R-coupled signaling network from a single study, since results will always be cell/tissue specific and differ from other cells/tissues. It should also be noted that both studies used transfected cells with an artificially high expression level of AT 1 Rs. This may have an impact on the results, meaning that AT 1 R-mediated signaling in primary cells with endogenous receptor expression may be different from signaling in overexpressing cell lines. Interestingly, the study by the Hansen group included a comparative phosphoproteomics approach in order to distinguish between G-protein-mediated and β-arrestin-mediated AT 1 R signaling by treating AT 1 R-HEK-293 either with the unbiased agonist Ang II (100 nM) or with the biased agonist [Sar 1 ,Ile 4 ,Ile 8 ]Ang II (SII Ang II; 18.7 μM), which activates Gα q protein-independent (including β-arrestin) signaling. The authors only included phosphosites with an increase (not a decrease) in phosphorylation level into further analysis. They found 1183 such regulated phosphosites on 527 phosphoproteins, with 427 (36%) phosphosites regulated in response to SII Ang II, meaning they are attributable to Gα q protein-independent AT 1 R signaling. Further analysis of the data generally revealed that Gα q protein-independent AT 1 R signaling is much more diverse and frequent than previously thought. This included a considerable importance of the AGC/CAM kinase family, which includes for example PKD (protein kinase D), PKC and CaMKII, for both Ang II- and SII Ang II-induced signaling. Unexpectedly, it was noted that all PKD proteoforms were enriched in the dataset of AT 1 R-HEK treated with SII Ang II, coinciding with an increased phosphorylation of peptides with the consensus PKD phosphorylation motif.
In further experiments using pharmacological inhibitors, the authors found that PKD activation by SII Ang II in AT 1 -HEK (i.e., Gα q protein-independent) involved the Ras/ROCK (Rho-associated protein kinase)/PKCδ pathway, whereas PKD activation by Ang II (Gα q protein-dependent and -independent) also involved other PKCs. Other findings comprised Gα q protein-dependence of the activation of transcription factors such as c-JUN (transcription factor Jun), HOXA3 (homeobox protein HOX-A3), and EP400 (E1A-binding protein p400), phosphorylation of proteins promoting migration, and phosphorylation of other membrane receptors such as the insulin receptor, the insulin-like growth factor 2 receptor or the β2-adrenergic receptor, whereas Gα q protein-independent signaling included reduced transcriptional activity in the nucleus and phosphorylation of CXC chemokine receptor 4 or fibroblast growth factor receptor 3 (among others). Phosphorylation of proteins involved in receptor endocytosis, anti-apoptosis, cytoskeletal rearrangement and cell cycle control was found for both signaling mechanisms, although the exact proteins in each pathway were not identical. In the year of publication of the study by the Hansen group (2010), the group of Robert Lefkowitz also applied phosphoproteomics for the study of AT 1 R signaling using the exact same cell type (AT 1 R-HEK-293; the Lefkowitz group provided these cells to the Hansen group), but with a focus on Gα q protein-independent/β-arrestin-dependent AT 1 R signaling by treating cells with SII Ang II only. The incubation time was 5 min and, therefore, similar to but not identical to the incubation times in the study by the Hansen group, which were 3 and 15 min. The dose of SII Ang II was slightly higher in the Lefkowitz study (30 μM) than in the Hansen study (18.7 μM). Using this approach, the authors identified 4552 phosphopeptides from 1555 phosphoproteins, of which 288 phosphopeptides met their rigorous definition of significance. In 222 phosphopeptides (from 171 phosphoproteins), phosphorylation levels were increased, and in 66 phosphopeptides (from 53 phosphoproteins), phosphorylation levels were decreased in response to the biased agonist SII Ang II. To verify their experimental approach, the authors successfully confirmed 5 of the identified phosphoproteins by Western blotting. They further noted a disproportionately high abundance of kinases among the phosphopeptides (38 protein kinases), for example, ERK1, c-Src, Akt, mTOR (mammalian target of rapamycin), and CAMK2, which they could (partly) confirm by additional bioinformatic analysis (Motif-X, Kinase Enrichment Analysis [KEA]). In a further approach for analyzing the entire dataset, the authors applied a combination of bioinformatic tools including gene ontology (GO) analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) canonical pathway analysis, and Ingenuity Network Analysis and found an enrichment of terms related to actin cytoskeleton reorganization. Together with data from a previous study, which identified a β-arrestin interactome by a global proteomics approach, the authors outlined an AT 1 R-coupled, β-arrestin-dependent cytoskeletal reorganization subnetwork. A central role in this network was played by the slingshot phosphatase, which was found to be significantly dephosphorylated at Ser 937 and Ser 940 by SII Ang II treatment, which is an activation mechanism.
Knockdown of β‐arrestin 1 and 2 by siRNA prevented SII Ang II‐induced slingshot activation, thus showing the β‐arrestin dependence of the effect. In a series of further, elegant experiments, the authors showed that slingshot dephosphorylates cofilin at Ser 3 , which is a mechanism related to activation of actin reorganization and lamellipodia formation. This AT 1 R‐induced effect seems to involve the formation of a β‐arrestin‐slingshot‐cofilin complex that may additionally contain the phosphatase PP2A (protein phosphatase 2), which is able to dephosphorylate and thus activate slingshot. Finally, the authors performed yet another series of bioinformatic analyses of their dataset, this time applying an inference algorithm and a literature‐based kinome network combined with known β‐arrestin‐regulated proteins and the results from the kinase prediction part of their study to construct an interconnecting network of AT 1 R‐β‐arrestin‐mediated signaling events. This way they found that major areas of AT 1 R‐β‐arrestin actions are the regulation of cell proliferation and cell cycle dynamics, cytoskeletal reorganization, adhesion and inter‐cellular communication. Although the two studies by the Hansen and the Lefkowitz groups had very similar objectives and designs, their results are only partially congruent, with only ≈30% identical hits. One reason may be the different methods for phosphopeptide enrichment in the two studies, another the stricter criteria for significance in the Lefkowitz study. However, the difference is also an expression of the fact that there is a risk of false‐positive or false‐negative hits in (phospho)‐proteomics datasets. Nevertheless, and importantly, the major functional areas which were predicted to be modulated by AT 1 R‐β‐arrestin signaling in the Hansen and Lefkowitz studies were widely identical. A third study by Louis Luttrell's group also investigated SII Ang II‐induced AT 1 R‐β‐arrestin signaling and Ang II‐induced global AT 1 R signaling by phosphoproteomics. As in the Hansen and Lefkowitz studies, the authors used AT 1 R‐HEK‐293 cells treated with SII Ang II (50 μM) or with Ang II (100 nM). The incubation time was 5 min. This study revealed far fewer phospho‐modified proteins than the other two for methodological reasons, namely the use of two‐dimensional gel electrophoresis (2DGE) and matrix‐assisted laser desorption/ionization mass spectrometry (MALDI‐MS) instead of liquid chromatography coupled to electrospray ionization mass spectrometry (LC‐ESI‐MS). The authors identified 36 phosphoproteins, of which 16 were only modified after SII Ang II, meaning they are part of the AT 1 R‐β‐arrestin axis. Two peptide inhibitors of protein phosphatase 2A (I1PP2A/I2PP2A) and prostaglandin E synthase 3 (PGES3) were selected for further validation. Additional co‐immunoprecipitation studies suggested the existence of I2PP2A/PP2A/Akt‐β‐arrestin and PGES3‐β‐arrestin complexes. Phosphorylation of I2PP2A within the β‐arrestin/I2PP2A/PP2A/Akt complex led to inhibition of PP2A activity and subsequently to activation of Akt through reduced Thr 308 dephosphorylation. Furthermore, the authors reported formation of a β‐arrestin‐PGES3 complex in response to SII Ang II which was responsible for increased PGE 2 production. This effect could be abolished by knocking down β‐arrestin.
The study of the Luttrell group was of particular importance because some of the findings (SII Ang II‐induced I2PP2A phosphorylation and PGE 2 synthesis) in the AT 1 R‐HEK‐293 cell line were confirmed in primary cells of the cardiovascular system, namely in vascular smooth muscle cells, whereas the other two studies were entirely performed in the artificial system of AT 1 R‐overexpressing HEK‐293 cells. None of the studies investigated any functional (cardiovascular) effects in ex vivo or in vivo experiments such as SII Ang II‐induced vasorelaxation through PGE 2 or through Akt‐mediated eNOS activation. However, increased PGE 2 production in response to the AT 1 R‐β‐arrestin‐biased agonist Des‐Asp 1 ‐Ang I was shown in human umbilical vein endothelial cells in a subsequent study by another group thus pointing to induction of a vasorelaxant mechanism by AT 1 R‐β‐arrestin signaling. In addition to the above studies, which looked at the entire AT 1 R‐coupled signaling network, a study by Gareri and co‐authors took a more targeted approach and specifically looked at changes in phosphorylation of the C‐terminal tail of the AT 1 R in response to biased (TRV023) and unbiased (Ang II) agonists. For this purpose, FLAG‐tagged human AT 1 Rs were enriched from HEK‐293 cell lysates using FLAG‐tag affinity chromatography and, subsequently, phosphoproteomics performed on the purified receptor. Applying this unique approach, the authors indeed identified different phosphorylation patterns (so‐called barcodes) of the AT 1 R C‐terminal tail in response to the biased or unbiased agonist, respectively. A major finding of the study was that for full β‐arrestin recruitment, phosphorylation of a certain cluster of serine and threonine residues in the proximal and middle portions of the tail was necessary. The authors concluded that binding of biased or unbiased agonists triggers different receptor conformations thus inducing divergent phosphorylation patterns at the C‐terminus of the receptor. Interestingly, a few years after the above‐reviewed phosphoproteomics studies on AT 1 R‐β‐arrestin‐biased signaling, the Lefkowitz group was able to show that biased or unbiased AT 1 R agonists stabilize the AT 1 R in distinct receptor conformations, which explains the different types of signaling mechanisms elicited by G‐protein‐ or ‐β‐arrestin‐coupled receptor activation. Figure illustrates the main findings of AT1R signaling using phosphoproteomics. THE AT 2 RECEPTOR As the AT 1 R, the AT 2 R is categorized as a class A G‐protein‐coupled receptor. However, signaling of the AT 2 R as determined by conventional methods and phosphoproteomics (the latter reviewed in detail in the following) is fundamentally different from classical GPCRs such as the AT 1 R, which made some researchers conclude that the AT 2 R may represent a distinct subclass of class A GPCRs. For example, the AT 2 R does not signal through G q and G 11/12 pathways, it does not recruit or signal through β‐arrestin and it is not internalized. , Instead, it signals through coupling to Gα i/o —which, however, does not lead to a decrease in cAMP formation as usual for other GPCRs —or it signals through G‐proteinindependent mechanisms such as coupling to the AT 2 R‐interacting protein (ATIP). , Studies on AT 2 R signaling by low‐throughput techniques consistently showed that upon agonist binding, the AT 2 R activates protein phosphatases such as SHP‐1 [Src homology region 2 (SH‐2) domain‐containing phosphatase 1], PP2A and MKP‐1 (MAPK phosphatase‐1). 
, , These activated protein phosphatases interfere with other kinase‐driven signaling pathways in an inhibitory way. For example, PP2A‐ and Gα i ‐dependent dephosphorylation of ERK‐2 leads to inhibition of insulin‐induced ERK1/2 signaling. AT 2 R signaling can also involve kinase activation like for example Akt, which is phosphorylated at the activating residue Ser 473 in response to AT 2 R stimulation. , Akt promotes eNOS activation through phosphorylation of eNOS‐Ser 1177 , which ultimately increases NO release by endothelial cells. In addition to eNOS‐Ser 1177 phosphorylation, eNOS activation by the AT 2 R also involves dephosphorylation of eNOS by phosphatases. The above‐reviewed signaling pathways—and others reviewed elsewhere —promote the classical effects of AT 2 R activation such as natriuresis, vasodilation, , anti‐inflammation, and antiproliferation, , as illustrated in Figure . The first study deploying time‐resolved, quantitative phosphoproteomics for the study of AT 2 R signaling used an untargeted approach for investigating early changes in the phosphorylation pattern of primary human aortic endothelial cells (HAEC) in response to short‐term (up to 20 min) AT 2 R activation by the small molecule agonist compound 21 (C21). Unexpectedly, the study revealed that in contrast to the prevailing notion that AT 2 R signaling is mainly driven by phosphatase activation, the frequency of kinase‐driven phosphorylation events was slightly higher. Kinase prediction identified the involvement of Akt in these phosphorylations, and also kinases that are known to activate phosphatases. In order to identify novel AT 2 R‐coupled signaling pathways with this hypothesis‐generating approach, proteins with modified phosphorylation levels were first analyzed by gene ontology (GO), a bioinformatic method for categorizing genes/proteins according to their molecular function, cell compartments or biological processes, followed by STRING analysis for identification of functional protein networks. These analyses unveiled an enrichment of terms related to cell proliferation and apoptosis. Within these terms, the authors selected, HDAC1 (histone deacetylase‐1), which was dephosphorylated following C21 treatment at Ser 421/423 (as subsequently confirmed by Western blotting) and which took a central position in the STRING‐analysis cluster related to proliferation/apoptosis. The authors used this result derived from the untargeted approach to further explore a potential, novel, AT 2 R‐induced signaling pathway that is initiated by AT 2 R‐induced Ser 421/423 ‐HDAC1 dephosphorylation in a targeted approach. They could eventually show that AT 2 R‐induced HDAC1 dephosphorylation attenuates its deacetylase activity leading to lessened deacetylation of the tumor suppressor p53, which is an activation mechanism that leads to nuclear translocation of p53 and culminates in antiproliferative and anti‐apoptotic effects of AT 2 R activation—functionally shown in this study in HAEC and in PC9, a non‐small lung cancer cell line. In a second study with a similar protocol (up to 20 min AT 2 R stimulation in HAEC) but an improved MS methodology with higher sensitivity, the same authors used the newly identified endogenous AT 2 R agonist Ang‐(1–5) for receptor activation. In this analysis and in contrast to the study with C21 reviewed above, dephosphorylations were slightly prevailing over phosphorylations. 
This difference may be due to the improved methodology in the 2nd study, which allowed the detection of many more sites with changes in phosphorylation status than the 1st study—including tyrosine phosphorylations, which could not be detected by the methodology of the 1st study, but which play an important role in AT 2 R signaling as was already detected by conventional methods years ago. Another reason for the slightly different result of the two studies in terms of the phosphorylation/dephosphorylation ratio may be that C21 and Ang‐(1–5) act as biased agonists and do not elicit the exact same array of signaling cascades. Importantly, despite these differences in the phosphorylation pattern, both phosphoproteomic studies clearly point to tissue protective, antiproliferative actions of the AT 2 R. In the study with Ang‐(1–5) as AT 2 R agonist, this was evident from performing a KEGG pathway analysis of the data, which detects enrichment of phospho‐modified proteins within defined signaling pathways pointing to activation or inhibition of these pathways by the applied agonist. In case of AT 2 R activation by Ang‐(1–5), KEGG pathway analysis revealed inhibition of VEGF (vascular endothelial growth factor) and HIF‐1 (hypoxia‐inducible factor‐1) signaling, inhibition of leucocyte transendothelial migration as well as effects on the actin cytoskeleton and on adhesion. These results still await confirmation by a 2nd method and by functional tests in future studies. THE Mas RECEPTOR As the AT 2 R, the MasR, which is the main receptor for Ang‐(1–7), is a class A GPCR with unconventional signaling mechanism as defined by conventional methods. Interestingly, MasR and AT 2 R signaling mechanisms have a lot of similarities. For example, as described for the AT 2 R in the preceding section, MasR‐mediated vasodilation induced by Ang‐(1–7) is resulting from an increase in NO release. , , Studies using classical approaches have shown that Ang‐(1–7)‐induced NO release involves a rapid and long‐lasting phosphorylation of eNOS at Ser 1177 after 5 to 30 min of treatment resulting in eNOS activation and NO production as shown in HAEC and MasR‐transfected CHO cells. , Western blotting further revealed that Akt, a kinase that phosphorylates eNOS at Ser 1177 , was phosphorylated at its activation site (Ser 473 ) following 5 min of Ang‐(1–7) treatment via the PI3K‐Akt pathway. The role of MasR in this process was confirmed using the selective MasR‐antagonist A779, and by the absence of the effect in non‐transfected CHO cells. A crosstalk has been described between Ang‐(1–7)/MasR signaling and insulin/insulin receptor (IR) signaling. In brief, Ang‐(1–7)/MasR increases the expression of insulin, and induces beneficial outcomes in insulin resistance and metabolic syndrome experimental models. , , , , Furthermore, Ang‐(1–7)/MasR signaling and Insulin/IR signaling share important effectors like PI3K, Akt, GSK‐3β (glycogen synthase kinase‐3 beta), IRS‐1 (insulin receptor substrate‐1) and JAK2. , , Another important aspect of Ang‐(1–7)/MasR signaling is the inhibition of pathways activated by Ang II/AT 1 R explaining, at least in part, the counter‐regulatory effects of Ang‐(1–7) against Ang II effects (Figure ). It has been shown in different models that Ang‐(1–7)/MasR induces the dephosphorylation and inhibition of key effectors of Ang II/AT 1 R signaling including ERK1/2, c‐Src, p38 MAPK, JNK (jun N‐terminal kinase), NF‐κB, STAT3, Akt, PKC‐α, GSK‐3β, and NADPH (nicotinamide‐adenine dinucleotide phosphate). 
The dephosphorylation of components of the MAPK/ERK pathway by Ang‐(1–7)/MasR involves activation of the phosphatases SHP‐2 and MKP‐1. A work published in 2012 was the first publication and the only one thus far applying phosphoproteomics to study Ang‐(1–7)/MasR signaling. The study focused on early phosphorylation events in HAEC (up to 20 min after Ang‐(1–7) stimulation). A total of 1288 unique phosphorylation sites on 699 proteins were identified. Of these, the phosphorylation levels of 121 sites on 79 proteins were reported to change significantly in response to the treatment, thus identifying potential components of Ang‐(1–7)/MasR signaling pathways in HAEC. This study supports the potential interplay between Ang‐(1–7)/MasR signaling and insulin/IR signaling, as eight of the identified phosphoproteins are also components of insulin/IR signaling: Akt, AKT1S1 (proline‐rich AKT1 substrate 1), CAV1 (caveolin‐1), FOXO‐1 (forkhead box protein O1), MAPK1, PXN (paxillin), PIK3C2A (phosphatidylinositol 4‐phosphate 3‐kinase C2 domain‐containing subunit alpha), and VIM (vimentin). The shared phosphoproteins represent approximately 10% of all proteins identified as differentially phosphorylated/dephosphorylated in response to Ang‐(1–7) treatment. In this study, FOXO‐1 was selected for further confirmatory experiments. FOXO‐1 is a transcription factor that undergoes Akt‐induced phosphorylation at Thr 24 , Ser 256 , and Ser 319 . Phospho‐FOXO‐1 is localized in the cytoplasm and is transcriptionally inactive. However, upon its dephosphorylation, FOXO‐1 is translocated into the nucleus and becomes transcriptionally active. Following 5 min of Ang‐(1–7)/MasR stimulation, a significant dephosphorylation of FOXO‐1‐Ser 256 was revealed by phosphoproteomics. Functional validation by confocal microscopy confirmed that Ang‐(1–7) led to nuclear accumulation of FOXO‐1 in HAEC. The identification of FOXO‐1 as an important downstream component of Ang‐(1–7)/MasR signaling is an example of the potential of untargeted phosphoproteomics in generating new hypotheses. As mentioned before in this review, Ang‐(1–7) induces the activation of PI3K‐Akt signaling in HAEC (Figure ). Since PI3K‐Akt signaling has been reported to lead to the phosphorylation of FOXO‐1, resulting in its inactivation and cytoplasmic accumulation, the observed dephosphorylation and nuclear accumulation was against expectations and would probably not have been found with a targeted approach (Figure ). The finding of Ang‐(1–7)/MasR‐induced FOXO‐1 activation by this study initiated a number of follow‐up studies investigating the role of FOXO‐1 in Ang‐(1–7)/MasR signaling and actions by hypothesis‐driven approaches. Another example of the use of MS‐based technologies for studying Ang‐(1–7)/MasR signaling is an interesting study by Hoffmann et al. in rat microvascular endothelial cells (RMVECs), which employed immunoprecipitation of the MasR under native conditions to co‐precipitate its interacting proteins before and after stimulation with Ang‐(1–7), followed by MS‐based identification of the MasR‐interacting proteins. A total of 50 proteins co‐precipitated with the MasR, including AT 1 R, mTOR, PRKD1 (serine/threonine protein kinase D1), RASGRF1 (ras‐specific guanine nucleotide‐releasing factor 1), TRPM6 (transient receptor potential cation channel subfamily M member 6), and GRIP1 (glutamate receptor‐interacting protein 1).
In addition to identifying new interaction partners of the MasR, the study also confirmed heterodimerization of the MasR with the AT 1 R, which is one of several heterodimers described for RAS receptors. MasR/AT 1 R heteromerization negatively modulates Ang II/AT 1 R signaling, for example by inhibiting AT 1 R‐induced inositol phosphate generation and intracellular Ca 2+ increase. THE MrgD RECEPTOR MrgD is a member of the Mas‐related G‐protein‐coupled receptor family and of the protective axis of the RAS, with Alamandine as its primary ligand. β‐alanine and GABA have been described as MrgD ligands too, though GABA is a low‐affinity MrgD agonist. A structural study of MrgD complexed with β‐alanine was recently published using cryo‐electron microscopy (Cryo‐EM). β‐alanine binds to a shallow pocket close to the extracellular loop 2 (ECL2), surrounded by the transmembrane (TM) domains TM3, TM4, TM5, and TM6. The β‐alanine/MrgD complex is stabilized by electrostatic interactions of the β‐alanine carbonyl group (C=O) with Arg 103 (TM3) and Asp 179 (TM5). Hydrogen bonds stabilize interactions of β‐alanine with Cys 164 (TM5) and Trp 241 (TM6). It is possible that Alamandine binds to the same site as β‐alanine because effects of Alamandine are abolished by a pre‐treatment with β‐alanine, suggesting that both ligands compete for the same site. However, it cannot be ruled out that Alamandine binds to a different site and that the observed β‐alanine "antagonistic" effect is due to an allosteric conformational change rather than a site competition, or that Alamandine binds to the same site but with different interaction partners within the receptor pocket. Thus, an investigation of MrgD complexed with Alamandine is still warranted. Like the other protective RAS receptors, the AT 2 R and MasR, MrgD mediates the induction of NO production. However, at least in cardiomyocytes, the signaling mechanism leading to Alamandine/MrgD‐induced NO synthesis seems different and includes the activation of the LKB1 (serine–threonine liver kinase B1)/AMPK (AMP‐activated protein kinase) pathway in a PI3K/Akt‐independent fashion. The LKB1/AMPK pathway also seems crucial for the MrgD‐mediated prevention of the hypertrophic effect induced by Ang II/AT 1 R in neonatal rat cardiomyocytes. This observation was confirmed in an in vivo transverse aortic constriction (TAC) model of cardiac hypertrophy in mice. TAC led to the dephosphorylation of AMPK‐Thr 172 , but Alamandine via MrgD restored AMPK‐Thr 172 phosphorylation, which is consistent with AMPK activation. Other signaling pathways and cellular events associated with the cardioprotective effect induced by Alamandine/MrgD in the TAC model, as identified by conventional methods, included the dephosphorylation and consequent inhibition of ERK1/2‐Thr 202 /Tyr 204 , phosphorylation of PLN (cardiac phospholamban)‐Thr 17 , and reduced expression of MMP‐2 (matrix metallopeptidase 2). Regarding TAC‐induced ROS production, Alamandine/MrgD decreased the expression of a subunit of NADPH oxidase (gp91phox) and increased the expression of SOD2 (superoxide dismutase 2, mitochondrial) and CAT (catalase). The MrgD‐coupled signaling network induced by Alamandine was explored by untargeted phosphoproteomics complemented with antibody‐based approaches in the context of a study that investigated a potential MrgD‐dependent antiproliferative and anti‐cancer effect in the human cancer cell lines Mia PaCa‐2 (pancreatic) and A549 (lung) and in MrgD‐transfected CHO cells (MrgD‐CHO).
Phosphoproteomics of CHO‐MrgD stimulated by Alamandine (up to 20 min) identified signaling pathways with potential tissue‐protective outcomes similar to those found in the phosphoproteomics studies for the AT 2 R and MasR, comprising the inhibition of the PI3K/Akt/mTOR and BRAF/MKK/ERK1/2 pathways, as well as the activation of FOXO‐1 and p53. Of note, the phosphoproteomic experiments exploring the antitumoral effect of Alamandine in Mia PaCa‐2 cells focused on later time points (up to 48 h) than all other RAS receptor phosphoproteomics studies. These incubation times were chosen because the antiproliferative effects elicited by Alamandine were only observed after 2 days of treatment. The authors reported that Alamandine induced a significant change in the phosphorylation of proteins associated with cytoskeleton regulation, potentially reducing the capability of the cells to migrate. It was also reported that Alamandine/MrgD activation led to dephosphorylation and consequent inhibition of key proteins associated with cell division, such as EIF3B (eukaryotic translation initiation factor 3 subunit B) at Ser 85 /Ser 119 and EIF4B at Ser 422 /Ser 498 /Thr 500 /Ser 504 . THE AT 4 RECEPTOR/IRAP Unlike the other RAS receptors AT 1 R, AT 2 R, MasR, and MrgD, which are seven‐transmembrane (7TM) G‐protein‐coupled receptors (GPCRs), the AT 4 R/IRAP is a single‐transmembrane (1TM) M1 zinc aminopeptidase. The receptor has a broad tissue distribution including expression in the brain, heart, kidneys, adrenal glands, and blood vessels. Ang IV binds to the IRAP catalytic site with high affinity, reducing its ability to degrade neuropeptides like vasopressin, oxytocin, kallidin, and somatostatin, among others. Classical experiments have shown that Ang IV modulates different signaling pathways depending on cell type or tissue, some of which could be inhibited by AT 1 R or AT 2 R antagonists and thereby attributed to activation of these receptors. However, the important beneficial effects of Ang IV on cognition (and others) seem AT 1 R/AT 2 R‐independent, but AT 4 R/IRAP‐dependent. Signaling of Ang IV through IRAP is still not entirely understood and may involve effects of the accumulated IRAP substrates or direct signaling effects of IRAP. To gain more insights into potential signaling pathways elicited by Ang IV/AT 4 R/IRAP, Wang et al. employed phosphoproteomics on N2A cells (mouse neuroblasts) treated or not with Ang IV for 30 min. In their publication, the authors focused the analysis of their data entirely on the dephosphorylation of the alpha catalytic subunit of phosphoprotein phosphatase 1 (PP1α‐Thr 320 ), which is an activation mechanism. In line with that, PP1α downstream substrates were found dephosphorylated, suggesting its important role in signaling in neuronal cells. Finally, the authors observed Ang IV‐induced G1/S cell cycle arrest, which they attributed to the increased activity of PP1α. COMMON RAS SIGNALING COMPONENTS Even though the number of phosphoproteome studies investigating RAS receptor signaling is still limited, it is, nevertheless, striking that the studies using untargeted approaches looking at receptors of the protective axis of the RAS identified widely similar signaling pathways, thus creating a kind of "déjà vu" experience. Analyzing four different phosphoproteome datasets from the AT 2 R, MasR, and MrgD, we observed a remarkable overlap of regulated phosphorylation events in response to short‐term agonist stimulation.
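Such a cross‐receptor comparison ultimately reduces to set operations on lists of regulated phosphosites. The minimal Python sketch below illustrates the principle with hypothetical, made‐up site identifiers rather than actual hits from the cited AT 2 R, MasR, or MrgD datasets; a real analysis would start from the published supplementary tables and harmonized protein accessions.

```python
# Minimal sketch of a cross-dataset overlap analysis of regulated phosphosites.
# The identifiers below ("PROTEIN_residue") are hypothetical placeholders, not
# actual hits from the AT2R, MasR or MrgD studies discussed in the text.

datasets = {
    "AT2R_C21":    {"FOXO1_S256", "TP53_S315", "HDAC1_S421", "MAPK1_T185"},
    "AT2R_Ang1-5": {"FOXO1_S256", "TP53_S315", "MAPK1_T185", "AKT1S1_T246"},
    "MasR_Ang1-7": {"FOXO1_S256", "TP53_S315", "AKT1_S473",  "MAPK1_T185"},
    "MrgD_Alaman": {"FOXO1_S256", "TP53_S315", "HDAC1_S421", "EIF4B_S422"},
}

# Sites regulated in every dataset (the "shared" signaling core).
shared = set.intersection(*datasets.values())
print("shared by all receptors:", sorted(shared))

# Sites unique to a single dataset ("receptor-specific" events).
for name, sites in datasets.items():
    others = set.union(*(s for n, s in datasets.items() if n != name))
    print(f"unique to {name}:", sorted(sites - others))

# Pairwise Jaccard indices quantify how congruent two datasets are.
names = list(datasets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        jaccard = len(datasets[a] & datasets[b]) / len(datasets[a] | datasets[b])
        print(f"Jaccard({a}, {b}) = {jaccard:.2f}")
```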
Figure illustrates some key signaling effectors shared by MasR, AT 2 R and MrgD according to the phosphoproteomics studies. For example, activation of all three receptors induced: FOXO‐1 dephosphorylation and consequent activation, p53 dephosphorylation and consequent activation, HDAC dephosphorylation and consequent inhibition, and ERK1/2 dephosphorylation and consequent inhibition. Akt and AKT1S1 (proline‐rich Akt1 substrate 1) were phospho‐modified in the same way by the MasR and the AT 2 R (phosphorylation/activation of Akt; dephosphorylation / inhibition of AKT1S1) whereas MrgD activation induced opposing effects (dephosphorylation / inhibition of Akt; phosphorylation/activation of AKT1S1). AMPK phosphorylation / activation was only observed for MrgD and AT 2 R (but not MasR) signaling, whereas MAPK1 dephosphorylation/inhibition was only detected for AT 2 R and MasR signaling. Surprisingly, C21‐induced AT 2 R activation led to ERK1/2 phosphorylation and consequent activation, while Ang‐(1–5)‐induced AT 2 R activation led to ERK1/2 dephosphorylation and consequent inhibition. However, C21‐induced ERK1/2 activation happened very early (after 1 min), whereas Ang‐(1–5)‐induced ERK1/2 inhibition occurred only after 20 min, which may indicate that these events are not part of the same signaling pathway and biological process. ERK1/2 activation can mediate a multitude of different biological effects such as phosphatase activation (a potentially protective mechanism) at very early time points or promotion of pro‐inflammatory and pro‐fibrotic pathways at later time points. WHAT TO CONSIDER WHEN DOING PHOSPHOPROTEOMICS 10.1 Cell lines and animal models Untargeted phosphoproteomics relies on protein databases to identify (phospho)‐proteins in samples. There are two main types of protein databases: those containing unreviewed proteins (e.g., UniProtKB/TrEMBL) and those with reviewed proteins (e.g., UniProtKB/Swiss‐Prot). Unreviewed proteins are “computationally annotated”, while reviewed proteins are “manually annotated”, which is preferable since the results are more reliable. As of September 2024, the UniProtKB/Swiss‐Prot database included 26821 reviewed proteins from Homo sapiens (human), 17823 from Mus musculus (mouse), 8304 from Rattus norvegicus (rat), and 247 from Cricetulus griseus (Chinese hamster). Thus, the choice of cell lines and animal models can significantly impact (phospho)‐proteomics results, since the size of reference databases differs between species. Therefore, the choice of species is critical, and samples from humans or mice are generally preferred over other species for (phospho)‐proteomic studies. However, samples from less commonly used species can still be valuable under certain circumstances. For example, the CHO cell line originating from Chinese hamster ( C. griseus ) is often used for transfection and expression of RAS receptors (MasR, AT 1 R, AT 2 R, or MrgD) because it does not constitutively express these receptors, which means that non‐transfected cells can serve as perfect negative controls. Rat models such as spontaneous hypertensive rats (SHR) and transgenic rats are also widely employed in RAS research and often the optimal model for studying certain diseases. For species with a limited number of annotated proteins in a reviewed database, researchers may use the UniProtKB/TrEMBL database of unreviewed proteins. As of September 2024, it contained 83438 proteins for C. griseus and 100383 for R. norvegicus . 
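As a concrete illustration of these database‐size differences, the short Python sketch below tallies Swiss‐Prot entries per organism from a locally downloaded UniProtKB/Swiss‐Prot FASTA file; the file name is an assumption, and the exact counts depend on the release used.

```python
# Minimal sketch: tally reviewed (Swiss-Prot) entries per organism of interest
# from a locally downloaded UniProtKB/Swiss-Prot FASTA file.
# "uniprot_sprot.fasta" is an assumed local file name; adjust it to your download.
import re
from collections import Counter

TAXA = {
    "9606":  "Homo sapiens",
    "10090": "Mus musculus",
    "10116": "Rattus norvegicus",
    "10029": "Cricetulus griseus",
}

ox_pattern = re.compile(r"\bOX=(\d+)\b")  # UniProt FASTA headers carry the NCBI taxon ID as OX=
counts = Counter()

with open("uniprot_sprot.fasta") as fasta:
    for line in fasta:
        if line.startswith(">"):
            match = ox_pattern.search(line)
            if match and match.group(1) in TAXA:
                counts[match.group(1)] += 1

for taxon_id, species in TAXA.items():
    print(f"{species} (OX={taxon_id}): {counts[taxon_id]} reviewed entries")
```

Running the same tally on the corresponding TrEMBL FASTA would show how much larger, but less curated, the unreviewed complement is for species such as C. griseus or R. norvegicus.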
However, the fact that these unreviewed proteins are only computationally annotated needs to be kept in mind; conclusions should be drawn with more caution and, where possible, validated by additional experiments. For phosphoproteome studies, availability of data about the role of phosphorylation/dephosphorylation of certain residues (e.g., whether phosphorylation leads to activation or inactivation of a protein) is even more limited, though there are specific databases like the PhosphoSitePlus database ( https://www.phosphosite.org ) that can be used to interrogate specific phosphorylation sites. There are also algorithms that use experimental datasets to predict active kinases (e.g., KSTAR ) and active signaling pathways (e.g., phuEGO ). Nevertheless, interpretation of untargeted phosphoproteomic data can be difficult, and it may be necessary to limit follow‐up studies to only those identified phosphoproteins for which information is available in databases. 10.2 Selectivity of ligands Phosphoproteomics as reviewed in this article serves to unravel signaling mechanisms induced by the activation of a receptor by a respective agonist. Since phosphoproteomics is a highly sensitive technique, it is crucial to verify in advance whether the agonist to be used is highly selective for the targeted receptor. Since ligand selectivity is also a matter of dosing (every ligand loses selectivity at some point when increasing the dose/concentration), it is also essential to choose a dose/concentration at which the agonist binds to and activates exclusively the target of interest. Data on the selectivity of a certain ligand often only exist for a restricted number of potential off‐targets, if at all. Therefore, there will always be some remaining uncertainty as to whether all observed effects can really be attributed to the interaction of the agonist with the target of interest. Thus, control experiments, for example with antagonists or in cells/animals which do not express the receptor of interest, are essential to control for off‐target effects.
REMAINING KNOWLEDGE GAPS Although the above‐reviewed MS‐based phosphoproteome studies provided major insights into RAS‐associated signaling mechanisms, some puzzle pieces are still missing for a global understanding of the RAS signaling networks. For AT 1 R signaling, for example, none of the phosphoproteomic studies used cells which endogenously express the AT 1 R. However, AT 1 R signaling patterns have been thoroughly characterized by low‐throughput techniques (reviewed elsewhere) using cells or tissues endogenously expressing the receptor, and most of the findings from phosphoproteomics in transfected cells are in concordance with findings from these low‐throughput studies. Whether, and to what extent, additional signaling mechanisms identified in the phosphoproteomics studies using transfected cells, which are not "backed up" by conventional studies, are also relevant in models endogenously expressing AT 1 Rs remains to be investigated.
To date, phosphoproteome‐based studies of RAS signaling have primarily relied on simplified systems such as primary cells (e.g., HAEC) or transfected cell lines expressing specific receptors (e.g., CHO‐MrgD, CHO‐AT 2 R). While these models provide a controlled environment to dissect receptor‐specific pathways and downstream effectors, they lack physiological complexity. Investigating RAS signaling in more complex systems, such as whole organisms or tissue‐specific models, would provide critical insights into the biological relevance of these signaling pathways. Such studies could determine whether the effectors identified in vitro are similarly modulated in vivo, where the interplay of multiple cell types, tissue environments, and systemic factors could influence the signaling dynamics. Moreover, in vivo phosphoproteomics could reveal novel effectors and pathway regulations that are not evident in isolated cell models, advancing our understanding of RAS biology and its role in health and disease. Phosphoproteomics is also a potential tool for comparing "shared" versus "unique" signaling patterns in different cell types/conditions. For example, Schenk and coworkers reported for AT 1 Rs substantial differences between Ang II‐induced signaling in HEK versus AB8/13 cells, both with exogenous AT 1 R expression. The same approach could also be used in cells/organisms with endogenous AT 1 R expression to unveil system bias (differences in signaling between different cells/tissues) or differences in signaling between normal and diseased conditions. Furthermore, the use of biased AT 1 R agonists in this setup would allow distinguishing between G‐protein‐ and β‐arrestin‐dependent signaling patterns involved in physiological processes in different cells and/or in the progression of diseased states. What is still very much warranted is the characterization of the signaling pathways elicited either by G‐protein‐ or by β‐arrestin‐biased ligands in systems endogenously expressing the AT 1 R. Such research has been hampered in the past by the unavailability of the respective biased AT 1 R agonists. The G‐protein‐biased AT 1 R agonist TRV055 became available only recently (first publication in 2019 ). β‐arrestin‐biased AT 1 R agonists have been available for longer, with the first, [Sar 1 ,Ile 4 ,Ile 8 ]Ang II (SII Ang II), published in 2003. Therefore, the initial approach to study G‐protein‐coupled versus β‐arrestin‐coupled AT 1 R signaling was a comparison of signaling cascades elicited by the balanced full agonist Ang II with those elicited by the β‐arrestin‐biased partial agonist SII Ang II. In this approach, the overlapping signaling components represent β‐arrestin‐dependent signaling pathways, whereas signaling components activated by Ang II only (but not by SII Ang II) constitute G‐protein‐dependent signaling pathways. As SII Ang II is a low‐affinity, partial β‐arrestin‐biased AT 1 R agonist with some residual G‐protein activation capability that becomes apparent particularly in AT 1 R‐overexpressing cells, and since AT 1 R‐overexpressing cells have been the standard model for studies on biased AT 1 R signaling so far, it is likely that existing data on AT 1 R β‐arrestin‐dependent signaling have some inaccuracies.
Thus, a systematic phosphoproteomic investigation of cells with endogenous AT 1 R expression treated with the now available optimized biased AT 1 R agonists such as TRV055 (for G‐protein‐biased signaling) and TRV027 (for β‐arrestin‐biased signaling) would accurately characterize AT 1 R signaling through the two major receptor activation mechanisms. Another area which has hardly been investigated is the characterization of signaling pathways elicited by RAS receptor heterodimers. RAS receptors form heterodimers with other receptors of the RAS (e.g., AT 1 R‐AT 2 R, AT 2 R‐Mas) or with non‐RAS receptors (e.g., AT 1 R‐bradykinin B 2 receptor, AT 1 R‐β‐adrenergic receptors). This is important because heterodimerization can change receptor conformations and, thereby, receptor signaling. This has potential clinical relevance, for example due to the phenomenon of cross‐inhibition, which means that one antagonist (e.g., an ARB) inhibits signaling of the other receptor in the dimer (e.g., a β1‐adrenergic receptor). The AT 2 R, MasR and MrgD have been described to be constitutively active, i.e., they elicit intracellular signaling at a low level without agonist binding. A further potential area of phosphoproteomics could be to determine whether constitutive signaling patterns differ from agonist‐induced signaling. Phosphoproteomics‐based studies on the signaling mechanisms elicited by several RAS components including (pro‐)renin/PRR, Ang‐(1–12), Ang‐(1–9), Ang A, and Ala‐(1–5) have not been performed yet. For some of these components, detailed knowledge of the signaling mechanisms may also help to identify the responsible receptor. Such studies may also clarify whether biased agonism only exists for the AT 1 R, that is, the classical arm of the RAS, or whether it can be found in receptors of the protective RAS as well. Finally, our understanding of RAS signaling could be significantly advanced through the integration of multi‐omics approaches. For instance, while phosphoproteomics offers critical insights into phosphorylation events and their roles in signaling cascades, examining other PTMs (e.g., glycosylation, methylation, and acetylation) could provide a more comprehensive view of the molecular mechanisms underlying RAS activity (e.g., regulation of gene expression and epigenetics). Furthermore, combining phosphoproteomics with metabolomics and lipidomics could reveal how RAS signaling pathways interact with cellular metabolism. From an in vivo perspective, recent advancements in single‐cell transcriptomics and single‐cell proteomics offer unprecedented opportunities to study RAS signaling at the resolution of individual cells. These techniques enable the characterization of cell‐type‐specific signaling dynamics and the identification of heterogeneous responses to RAS stimuli within complex tissues. CONCLUSIONS Phosphoproteomics is a powerful technique for quantifying phosphorylation events in an unbiased manner and has proven invaluable for studying signaling pathways across numerous receptor systems. However, its application in the context of RAS‐related signaling pathways remains surprisingly underexplored. There is significant potential to utilize phosphoproteomics for investigating the signaling cascades of emerging RAS components, such as Ang‐(1–5) and Ala‐(1–5), to study biased agonism within the RAS, and to explore how heterodimerization of RAS receptors impacts cellular signaling networks.
With recent advancements enabling the identification of tens of thousands of phosphorylation sites per experiment, a comprehensive re‐examination of RAS receptor signaling is warranted, as new effectors and regulatory mechanisms are likely to emerge. Moreover, extensive datasets containing thousands of phosphorylated proteins modulated by RAS hormones are available in public repositories (e.g., PRIDE, Peptide Atlas, MassIVE, iProX) through the ProteomeXchange Consortium ( https://www.proteomexchange.org ). These datasets are often only partially analyzed in the original studies and, therefore, can be regarded as “goldmines” which offer opportunities for re‐analysis or meta‐analysis to identify signaling effectors which were previously overlooked or not explored in detail in the original studies. By revisiting these datasets with focused questions, researchers can extract valuable new insights from the data, broadening our understanding of RAS biology and potentially uncovering novel therapeutic targets. Igor Maciel Souza‐Silva: Conceptualization; writing – original draft; writing – review and editing. Victor Corasolla Carregari: Writing – original draft. U. Muscha Steckelings: Conceptualization; funding acquisition; writing – original draft; writing – review and editing; supervision. Thiago Verano‐Braga: Conceptualization; funding acquisition; writing – review and editing; writing – original draft; supervision. T.V.‐B. received funding from CNPq (406936/2023‐4; 309965/2022‐5), CAPES‐Finance Code 001 (88881.700905/2022‐01; 88887.916694/2023‐00), and FAPEMIG (BPD‐00133‐22). U.M.S. received funding from the Danish Council for Independent Research (4004‐00485B, 0134‐00297B) and the Novo Nordisk Foundation (6239, 0058592). The authors declare no conflict of interest. |
Preliminary Investigation Towards a Safety Tool for Swine Brucellosis Diagnosis by a Proteomic Approach Within the One-Health Framework | d6f07d5b-dfda-41e0-bf27-b3b7a8d41b64 | 11855111 | Biochemistry[mh] | Brucella spp. are Gram-negative coccobacillus bacteria that cause diseases in various animal species, including humans . In domestic animals, the disease occurs as a chronic infection which results in placentitis and abortion in pregnant females, and orchitis and epididymitis in males, causing significant economic losses in livestock farms . Brucella spp. can persist and replicate within the phagocytic cells of the reticuloendothelial system and in non-phagocytic cells such as trophoblasts . When the vacuoles containing Brucella -individuals are fused with lysosome for the bacteria degradation, the lysosomal proteins are excluded, and the Brucella -containing vacuoles are associated with the endoplasmic reticulum which represents the intracellular replication site for Brucella . Among the twelve known Brucella species, the most frequent agents of brucellosis in livestock and humans are Brucella melitensis , Brucella abortus , and Brucella suis . Several biovars of these Brucella species exist, and it is possible to distinguish five biovars of B. suis . Although B. melitensis and B. abortus can be transmitted to pigs because of contact with ruminants, swine brucellosis is mainly caused by B. suis , biovars 1, 2, and 3 . B. suis bv. 1 and 3 are rarely reported in Europe while B. suis bv. 2 is largely diffused in East Europe. It was also introduced in Italy, where it was detected in domestic pigs and wild boars . However, in Italy, Bertelloni and colleagues reported that swine brucellosis seems to have a very limited spread in intensive farms. B. suis bv. 2 recognizes as principal hosts swine and hares, but it has been also detected in cows, causing seroconversion to traditional tests for bovine brucellosis, without clinical signs . Human infections by B. suis bv. 2 were rarely reported . Traditional methods for the diagnosis of brucellosis include bacteria isolation and characterization from biological samples, and serological tests. In addition, several molecular methods including PCR, PCR-restriction fragment length polymorphism (RFLP), and Southern blot, allowed, to a certain extent, the differentiation of Brucella species and some of their biovars . Serological methods are often employed in control and eradication programs to initially identify the possible positive animals. These methods are based on the detection of antibodies against the lipopolysaccharides (sLPS) of smooth Brucella strains generated by infected animals . The monoclonal antibodies against A and M antigens recognize the smooth LPS of B. suis strains; however, the first does not recognize B. melitensis strains and second does not bind to B. abortus strains . The Rose Bengal test (RBT), complement fixation test (CFT), indirect/competitive enzyme-linked immunosorbent assay (I/C ELISA), and fluorescence polarization assay (FPA) are the validated serological tests commonly used for swine brucellosis diagnosis . The serologic tests used to diagnose brucellosis were mostly developed for the detection of the A dominant, B. abortus O side-chain in infected cattle; consequently, these diagnostic tests have lower sensitivity and specificity when applied in swine than in cattle . In general, serological tests present some limitations, mainly concerning specificity and sensitivity, especially when screening individual animals . 
For these reasons, test interpretation is generally conducted at a group or herd level, requiring bacterial isolation or molecular assays to confirm serologic data . Other Gram-negative bacteria, namely Escherichia coli O157:H7, Vibrio cholerae O1, Salmonella group N (O:30), and Yersinia enterocolitica O:9, can induce the production of antibodies that cross-react with the Brucella sLPS antigens . In particular, Y. enterocolitica is widespread in swine populations and has an O-antigen LPS chain nearly identical to that of Brucella , resulting in a significant number of false positive serological reactions . The RBT is used as a screening test, but it lacks specificity for discriminating reactions caused by smooth Brucella strains from cross-reactions caused by other bacteria . The CFT is generally used as a confirmatory test, but it has a reduced sensitivity for B. suis infection diagnosis, and it is affected by cross-reactions with other bacteria . The FPA has shown very good overall performance, but, as with other serological tests, its sensitivity in chronically infected animals is low . To overcome these shortcomings, alternative immunoblotting methods are being investigated to increase the specificity and sensitivity of serological brucellosis diagnosis, based on a rough strain of Brucella melitensis (88/131) ; on outer membrane proteins (OMPs) of the Rev 1 strain of B. melitensis ; and on an extract of B. abortus and B. melitensis . In all these techniques, the authors had to cultivate the bacteria, exposing operators to the risk of infection, since Brucella can easily infect the operator by airborne transmission . Among the new techniques tested, one includes the use of Brucellergene OCB (Rhône-Mérieux, Lyon, France), a commercial antigen produced from B. melitensis B115, previously employed in swine for in vitro serological tests, such as ELISA , and for in vivo skin tests , showing significant specificity and the ability to discriminate false positive serological reactions. Brucellergene OCB is a mixture of more than 20 cytoplasmic proteins, including T-cell antigens, Brucella bacterioferritin, and P39 proteins, prepared from a rough (deficient in smooth LPS) mutant of Brucella melitensis B115 . Bertelloni and colleagues used Brucellergene as a tool to detect brucellosis-affected animals by Dot Blot, confirming its validity and ease of use in swine brucellosis serological diagnosis. Although the use of Brucellergene itself is risk-free for operators, its production still requires the presence of Brucella in the laboratory, exposing operators to the risk of infection. This work aims to identify Brucella antigenic proteins in Brucellergene as a starting point for the development of safer immunological techniques for Brucella screening. The SDS-PAGE of a Brucellergene OCB sample revealed sixteen protein bands, with molecular weights of 115 (B1), 84 (B2), 55 (B3), 50 (B4), 48 (B5), 44 (B6), 39 (B7), 34 (B8), 33 (B9), 31 (B10), 29 (B11), 27 (B12), 20 (B13), 13 (B14), 12 (B15), and 11 (B16) kDa. In addition to the SDS-PAGE of the Brucellergene OCB sample, 2D electrophoresis was also performed . In the 2D electrophoresis, at least 20 spots were detected, with molecular weights corresponding to those of the bands obtained in the SDS-PAGE, while the isoelectric points ranged between pH 4.8 and 7.8. The results of the Western Blot applied to the SDS-PAGE are described below.
In panels a and b, three bands, corresponding to B3, B13, and B16 and with molecular weights of 55, 20, and 11 kDa, respectively, were able to bind the positive anti- Brucella swine serum. The Western Blot of the 2D gel of Brucellergene on nitrocellulose did not show spots corresponding to those of the electrophoretic gel. The strip resulting from isoelectrofocusing was directly blotted, showing a band named I1 at an isoelectric point around pH 5.5–6 (panel c). The proteins identified by mass spectrometry are reported below. The use of Brucellergene OCB by Dot Blot as a tool to detect brucellosis-affected animals has already been investigated by Bertelloni and colleagues , who tested 374 swine sera for brucellosis using the Rose Bengal Test (RBT), complement fixation test (CFT), and Dot Blot, using Brucellergene as an antigen. Using the CFT as the gold standard, they observed a concordance of at least 91% for the Dot Blot . Y. enterocolitica is mainly responsible for cross-reactions in swine . The Dot Blot, using Brucellergene as an antigen and an anti- Yersinia enterocolitica serum as an antibody, did not show cross-reaction, suggesting a promising specificity . Since Brucellergene bound the anti- Brucella swine serum but did not bind the anti- Yersinia serum, we investigated by Western Blot which of the proteins contained in Brucellergene could bind Brucella -positive swine serum. It was assumed that, like whole Brucellergene, these proteins would not cross-react with the anti- Yersinia serum. The Brucella proteome has already been studied by several authors . Hamidi and colleagues developed a ribosomal proteome-based mapping for the establishment of biomarker profile libraries to identify B. abortus and B. melitensis , as well as elucidating refined differences between virulent and vaccine strains. To the best of our knowledge, Brucellergene had never been investigated using a proteomic approach. In agreement with several authors who have previously investigated the Brucella proteome, the present results highlighted B. melitensis proteins with molecular weights in the range of 10–116 kDa . Regarding the 2D SDS-PAGE, the Western Blot approach did not reveal serum-binding spots matching those of the stained gel. This is because chemiluminescent detection is more sensitive and produces a signal at lower protein concentrations than staining of the gel with Coomassie G250, according to the product sheet provided by the company . It can be speculated that a milder treatment of the Brucellergene might preserve the proteins in their native forms and thus might improve the resolution of the Western Blot. Further investigations are therefore needed to clarify this aspect. Among the detected bands, only those which reacted with the Brucella -serum in the Western Blot were identified by Mass Spectrometry. These bands corresponded to four proteins identified as follows: a probable sugar-binding protein, a peptide ABC transporter substrate-binding protein, a GntR family transcriptional regulator, and a conserved hypothetical protein. A class of sugar-binding proteins with molecular weight and isoelectric points corresponding to the protein found by this investigation (probable sugar-binding periplasmic protein B. abortus str 2308A) has also been observed to be overexpressed in the proteome of Rev 1 (an attenuated strain of B. melitensis ) . Rev 1 is considered a highly effective vaccine in the control of brucellosis in small ruminants in many countries .
A sugar-binding protein with a similar isoelectric point and molecular weight has also been detected both in Rev 1 and in a virulent B. melitensis strain, 16M . To the best of our knowledge, in terms of amino acid sequence this protein does not resemble any cloned Yersinia enterocolitica protein (0% homology). The second protein identified belongs to the ATP-binding cassette (ABC) transporters, a large group of membrane protein complexes that couple the transport of a substrate across the membrane to the hydrolysis of ATP . In prokaryotes, ABC transporters are localized to the plasma membrane, and ATP is hydrolyzed on the cytoplasmic side . Furthermore, ABC transporters are characterized by two nucleotide-binding domains (NBDs) and two transmembrane domains (TMDs) . An ABC transporter acts as a transporter of different molecules across biological membranes and participates in a variety of biological processes, such as maintaining the osmotic pressure balance inside and outside the cell, antigen presentation, cell differentiation, and bacterial immunity . A protein of about 60 kDa binding all sera was also found by Wareth and colleagues , who applied a Western Blot to an extract of B. abortus and B. melitensis using cattle, buffaloes, sheep, and goat sera as primary antibodies. This protein could correspond, in terms of molecular weight, to the protein identified in this investigation as the peptide ABC transporter substrate-binding protein. When comparing the amino acid sequence of the probable sugar-binding protein with other proteins, a homology of 99.52% with the peptide ABC transporter substrate-binding protein was observed. It could be speculated that Brucella -positive swine serum might bind to these two proteins in a similar portion of their amino acid sequences. Comparing the amino acid sequence of the peptide ABC transporter substrate-binding protein with other proteins, a homology higher than 86% was only observed with proteins belonging to the genus Brucella , thus suggesting that this is a genus-specific protein. The peptide ABC transporter substrate-binding protein is similar in terms of amino acid composition to cloned Yersinia enterocolitica proteins, with a homology lower than 41%. The third protein identified is a GntR regulator, an important virulence factor in Brucella that plays roles in the maintenance of fatty acid concentrations, amino acid catabolism, organic acid production, the regulation of carbon catabolism, and the degradation of complex organics . Furthermore, some research indicates that GntR mutants show reduced virulence . The fourth protein identified is a conserved hypothetical protein. Wagner and colleagues had already identified in the B. melitensis proteome several hypothetical low-molecular-weight proteins whose function, to the best of our knowledge, is undefined. Comparing the amino acid sequences of the GntR family transcriptional regulator and the conserved hypothetical protein with other proteins, a homology higher than 82% or 70%, respectively, was only observed with proteins belonging to the genus Brucella , thus suggesting that they are genus-specific proteins. The GntR family transcriptional regulator protein is similar in terms of amino acid composition to Yersinia enterocolitica proteins, with a homology lower than 49%, while the conserved hypothetical protein does not resemble any cloned Yersinia enterocolitica protein.
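Homology percentages of this kind come from pairwise sequence alignments. The following minimal Python sketch only illustrates the final percent-identity calculation on two already-aligned toy peptides; the sequences are hypothetical placeholders, not the actual Brucella or Yersinia proteins, and the alignment itself would in practice come from a dedicated tool such as BLAST.

```python
# Minimal sketch of a percent-identity calculation between two ALIGNED sequences.
# The toy peptides below are hypothetical and are not the Brucella or Yersinia
# proteins discussed in the text; real comparisons would start from a BLAST (or
# similar) alignment of the full-length sequences.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over aligned columns, ignoring columns that are gaps in both."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    columns = [(a, b) for a, b in zip(aligned_a, aligned_b) if not (a == "-" and b == "-")]
    matches = sum(1 for a, b in columns if a == b and a != "-")
    return 100.0 * matches / len(columns)

# Toy example: two short aligned stretches with one substitution and one gap.
seq_reference = "MKKLLVAAT-GLSFA"
seq_query     = "MKKLLVAATSGLAFA"
print(f"identity: {percent_identity(seq_reference, seq_query):.1f}%")
```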
Concerning the subcellular localization prediction, both the probable sugar-binding protein and the ABC transporter substrate-binding protein are predicted to be periplasmic proteins. Brucellae can reversibly modify their cell envelope to adapt to changes in the host intracellular microenvironment and improve their survival by modifying the host immune response. Zai and colleagues investigated the resistance of B. abortus to various stresses (e.g., antibacterial stress, nutrient starvation stress, and physicochemical stress) and observed that some proteins, including the ABC transporter ones, were still produced by the bacterium despite the stressful conditions. Since they are also expressed when the bacterium is under stress, they could be a target for antibodies produced by the infected animal. It could therefore be speculated that ABC transporter proteins may be a suitable target for a Brucella identification test. Some authors have investigated the proteomes of Brucella species and of virulent/attenuated strains to search for species-specific proteins as a basis for new diagnostic screening methods. Eschenbrenner and colleagues compared the B. melitensis and B. abortus proteomes and observed the presence of ABC transporter proteins in both species. This could suggest that ABC transporter proteins are not species-specific and could therefore be further investigated as possible antigens for a general immunological kit for the identification of Brucella infections.

The "probable sugar-binding periplasmic protein B. abortus str 2308A", "peptide ABC transporter substrate-binding protein B. melitensis", "GntR family transcriptional regulator B. melitensis", and "conserved hypothetical protein B. melitensis M28" identified in this work could be produced in vitro, providing the basis for the development of a diagnostic kit and avoiding the Brucella culture otherwise required for large-scale antigen production. In vitro synthesis of the protein could be performed in a molecular biology laboratory by designing ad hoc primers and cloning the corresponding gene, via a vector, into an expression host (e.g., Escherichia coli). After purification by SDS-PAGE and dedicated columns, the protein could be tested by Dot Blot with anti-Brucella-positive swine serum.

4. Materials and Methods

4.1. Material

Since Bertelloni and colleagues reported that only positive serum cross-reacted with Brucellergene, only positive serum, from a free-range farm of "Cinta Senese" pigs in southern Tuscany (Siena province, Italy), was used in this investigation. The serum was stored at −20 °C until processing. The antigen was the Brucellergene OCB (Rhône-Mérieux, France), produced from the B. melitensis rough strain B115, provided by the "Istituto Zooprofilattico della Lombardia e dell'Emilia Romagna Bruno Ubertini, Brescia, Italy" and by the "Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise G. Caporale, Teramo, Italy" for Western Blot (WB).

4.2. Sodium Dodecyl Sulphate-PolyAcrylamide Gel Electrophoresis (SDS-PAGE)

The Brucellergene total protein content was measured with a Qubit 2.0 Fluorometer (Invitrogen, Waltham, MA, USA). Ten µg of total Brucellergene protein was loaded onto 7.5% T, 2.6% C separating polyacrylamide gels (1.5 mm thick). A 10–250 kDa pre-stained Sharpmass™ V Plus protein MW marker (Euroclone, Pero, Italy) was also loaded. SDS-PAGE was performed at 20 mA/gel and 15 °C using an SE 260 mini vertical electrophoresis unit (GE Healthcare, Chicago, IL, USA).
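Apparent molecular weights of bands such as B3, B13, and B16 are typically estimated from a calibration curve built on the migration of the pre-stained marker. The sketch below is only a hypothetical illustration of that calculation: the intermediate marker band sizes and all Rf values are invented placeholders, not measurements from this study, and numpy is assumed to be available.

```python
# Hypothetical sketch: estimating apparent molecular weights from an SDS-PAGE
# gel calibrated with a 10-250 kDa pre-stained marker. log10(MW) is fitted
# linearly against relative migration (Rf). All Rf values and the nominal
# marker band sizes below are placeholders for illustration only.
import numpy as np

marker_mw = np.array([250, 150, 100, 70, 50, 35, 25, 15, 10], dtype=float)   # kDa (assumed)
marker_rf = np.array([0.05, 0.13, 0.22, 0.31, 0.42, 0.55, 0.67, 0.82, 0.93])  # placeholder

slope, intercept = np.polyfit(marker_rf, np.log10(marker_mw), 1)

def apparent_mw(rf: float) -> float:
    """Estimate the apparent MW (kDa) of a band from its relative migration."""
    return 10 ** (slope * rf + intercept)

for band, rf in {"B3": 0.40, "B13": 0.73, "B16": 0.88}.items():  # placeholder Rf values
    print(f"{band}: ~{apparent_mw(rf):.0f} kDa")
```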
4.3. 2DE (Two-Dimensional Gel Electrophoresis) SDS-PAGE

Isoelectric focusing was performed at 20 °C on an IPGphor III apparatus (GE Healthcare) following a previously reported protocol. Volumes of protein extract corresponding to 75–150 μg of total protein were mixed with a rehydration solution (7 M urea, 2 M thiourea, 2% CHAPS, 0.5% dithiothreitol, 1% IPG buffer, and a trace of bromophenol blue) and loaded onto 7 cm (pH 3–10) strips (rehydration time 9 h, 50 mA/strip). For some strips, the Western Blot method was applied. Prior to SDS-PAGE, the IPG strips were equilibrated for 8 min in 50 mM Tris-HCl pH 8.8, 30% glycerol, 6 M urea, 4% SDS, and 2% dithiothreitol, and afterwards for 12 min in 50 mM Tris-HCl pH 6.8, 30% glycerol, 6 M urea, 4% SDS, 2.5% iodoacetamide, and bromophenol blue. The subsequent SDS-PAGE was performed using self-cast 7.5% T, 2.6% C separating polyacrylamide gels according to Laemmli, without a stacking gel.

4.4. Western Blot (WB)

For each SDS-PAGE, two gels were run. In one gel, proteins were fixed with a 40% methanol and 10% acetic acid solution for 30 min; the gel was then stained with a colloidal Coomassie Brilliant G solution, destained with water, scanned with an Epson Perfection V750 Pro (Suwa, Nagano, Japan), and processed with ImageJ software, version 1.54. From the second gel, proteins were transferred to a nitrocellulose membrane (0.45 µm pore size, Thermo Scientific, Waltham, MA, USA) with an ECL TE 70 PWR semi-dry transfer unit (GE Healthcare) at 0.8 mA/cm² for 4 h and 30 min. The Western Blot was performed according to Iovinella et al., with modifications. The membrane was exposed to serum samples at a 1:200 dilution, with 30 min of incubation in the dark and with inactivation at 58 ± 2 °C for 60 min. Afterwards, the membrane was incubated for 1 h at room temperature with a polyclonal rabbit anti-Pig IgG (H+L) antibody, HRP conjugated (Bethyl Laboratories, Montgomery, TX, USA), diluted 1:10,000. The reaction was detected with the Clarity™ Western ECL Substrate Kit (Bio-Rad Laboratories, Hercules, CA, USA). The chemiluminescent signal was detected in a dark room with a 20 s exposure using a Nikon D5100 camera (Tokyo, Japan) fitted with a 50 mm f/1.4 lens and a 12 mm extension tube.

4.5. Mass Spectrometry

The gel bands corresponding to those that reacted with the antibody in the Western Blot were located in the gel, excised, and sent to a Mass Spectrometry Centre (CISM, Florence University, Florence, Italy), where mass spectrometry was applied and proteins were identified. The excised bands were destained and the proteins digested as reported by Dani et al. Each peptide mixture was submitted to capillary-LC-μESI-MS/MS analysis on an Ultimate 3000 HPLC (Dionex, San Donato Milanese, Milan, Italy) coupled to an LTQ Orbitrap mass spectrometer (Thermo Fisher, Bremen, Germany). Peptides were concentrated on a PepMap100 C18 precolumn cartridge (300 μm id × 5 mm, 5 μm, 100 Å, LC Packings Dionex, Sunnyvale, CA, USA) and then eluted on a homemade capillary column packed with Aeris Peptide XB-C18 phase (180 μm id × 15 cm, 3.6 μm, 100 Å, Phenomenex, Torrance, CA, USA) at 1 μL/min. The loading mobile phases were 0.1% TFA in H2O (phase A) and 0.1% TFA in CH3CN (phase B). The elution mobile phase compositions were H2O 0.1% formic acid/CH3CN 97/3 (phase A) and CH3CN 0.1% formic acid/H2O 97/3 (phase B). The elution program was as follows: 0 min, 4% B; 10 min, 40% B; 30 min, 65% B; 35 min, 65% B; 36 min, 90% B; 40 min, 90% B; 41 min, 4% B; 60 min, 4% B.
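As a small, purely illustrative aid, the elution program listed above can be encoded as (time, %B) breakpoints and sanity-checked programmatically. The sketch below assumes numpy is available and assumes linear ramps between breakpoints, which is a simplification of how the instrument actually executes the gradient.

```python
# Sketch: encode the quoted LC elution program as (time in min, %B) breakpoints,
# sanity-check it, and interpolate the expected %B at an arbitrary time point.
import numpy as np

gradient = [(0, 4), (10, 40), (30, 65), (35, 65), (36, 90), (40, 90), (41, 4), (60, 4)]
times, percent_b = map(np.array, zip(*gradient))

assert np.all(np.diff(times) > 0), "time points must be strictly increasing"
assert np.all((percent_b >= 0) & (percent_b <= 100)), "%B must stay within 0-100"

def percent_b_at(t: float) -> float:
    """Linearly interpolate the mobile phase B percentage at time t (min)."""
    return float(np.interp(t, times, percent_b))

print(percent_b_at(20.0))  # 52.5, i.e., midway through the 10-30 min ramp
```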
Mass spectra were acquired in positive ion mode, setting the spray voltage at 1.8 kV, the capillary voltage and temperature at 45 V and 200 °C, respectively, and the tube lens at 130 V. Data were acquired in data-dependent mode with dynamic exclusion enabled (repeat count 2, repeat duration 15 s, exclusion duration 30 s); survey MS scans were recorded in the Orbitrap analyzer in the mass range 300–2000 m/z at a nominal resolution of 15,000 at m/z = 400; then up to three of the most intense ions in each full MS scan were fragmented (isolation width 3 m/z, normalized collision energy 30) and analyzed in the IT analyzer. Singly charged ions did not trigger MS/MS experiments. The acquired data were searched with the Mascot 2.4 search engine (Matrix Science Ltd., London, UK) against Brucella protein sequences downloaded from NCBI.
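For readers who wish to assemble a comparable local sequence collection, the sketch below illustrates one possible way to download Brucella protein sequences in FASTA format from NCBI using Biopython's Entrez utilities. The query term, retmax value, and e-mail address are illustrative placeholders, not the actual parameters used for the Mascot search in this study, and an internet connection is assumed.

```python
# Hedged sketch: retrieve a set of Brucella protein sequences from NCBI
# (Entrez) and write them to a FASTA file that could serve as a local
# search database. Query term and retmax are illustrative choices only.
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # required by NCBI; placeholder address

handle = Entrez.esearch(db="protein", term="Brucella[Organism]", retmax=200)
ids = Entrez.read(handle)["IdList"]
handle.close()

handle = Entrez.efetch(db="protein", id=",".join(ids), rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()

SeqIO.write(records, "brucella_proteins.fasta", "fasta")
print(f"Downloaded {len(records)} protein sequences")
```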
5. Conclusions

Four proteins able to bind Brucella-positive swine serum were identified by proteomic and Western Blot approaches: a probable sugar-binding protein, a peptide ABC transporter substrate-binding protein, a GntR family transcriptional regulator, and a conserved hypothetical protein. All of them could be exploited to enhance the specificity of serological investigations. Among these proteins, the peptide ABC transporter substrate-binding protein seems the most promising candidate for use as a specific antigen, because Brucella can produce it even under stress conditions. Although Brucellergene is safe to handle, standardized, and already potentially useful for the serological investigation of Brucella by Dot Blot, it nevertheless requires the cultivation of Brucella in the laboratory.
As future steps for serological assays in swine brucellosis, the most suitable antigenic proteins could be synthesized in vitro, avoiding the cultivation of Brucellae and thus reducing the risk of infection of operators by airborne transmission. Further investigation will then be needed to test these proteins and to verify whether they can provide a safe tool for serological diagnosis in screening programs for swine brucellosis, breeding surveillance, or monitoring plans.